Human AI Net - a Human and Artificial Intelligence network



#whyHumanAiNet
A bizarre kind of games and AI research. Why? Computers are where we keep the parts of our minds that don't fit in our brains; it used to be pen, paper, and drawings. The goal is to explore how minds work, and for millions of people to do it together in games and in experiments inside those games. We teach each other how this gaming and research process works, anyone can take it in their own direction (from inside the game, or with more skill using the #opensource code) if they disagree with how others do things, and we combine what we learn in some of these separate branches we explore. In this process of building things together, branching, and merging parts of it, as usual in #opensource, we may find or create bigger minds (or thinktanks) made of many of our minds and AI minds, and see where that takes us, especially in the games we create that way. If that sounds fun or useful, you'll want to try it in the #opensource network of many computers going up at http://humanai.net, and hopefully at many other independently operated websites, which will be designed to work together anyway once the core code is working better. It's important that nobody can control the network except their own small part of it, which protects our #opensource freedom to take it in our own directions when we want, and to merge with what others have done if they publish it and we want to.
#gameful
#prilistPrilist

#theForum
For those who want to talk about #humanAiNet, like what we can use it for or the progress on building new features: http://reddit.com/r/networkMindsTogether
#prilistPrilist

#humanAiNet
#opensource GNU GPL 2+ (while #humanAiCore also allows #opensource LGPL). http://sourceforge.net/projects/humanainet for downloads (including source code inside the executable jar files, which you can unzip), or https://github.com/benrayfield/humanainet for source code.
#c
#humanAiCore
#prilistPrilist

#smartblob
For many people to explore how minds work together, we need a simple game that's open-ended in what it can become, where we can play and experiment with simulated minds (AI). These minds will be put in smartblobs. A #smartblob is any 2d shape plus a mind that feels how others try to bend it, sees outward, pulls on things at a distance in each direction, and learns to do that better over time as we play with them and they play with each other. They will bend, twist, and use each other as tools. Each #smartblob chooses its own shape at each moment, including how much it pushes back or allows other smartblobs to bend and twist it into other shapes. They will be able to jump, climb, roll, throw each other, play kingOfTheHill, race, and teach each other new shapes and behaviors by example. It will be an open-ended exploration of all possible 2d shapes and behaviors that could become many kinds of games, and some of those games will share one space that millions of people play and experiment in together. This will be bizarrely fun, surprising, and educational about how AI and game theory work, especially #localMaxVsGlobalMaxOfNashEquilibrium: how people interact with each other on a large scale to agree on the rules of these games in different parts of the shared space, and how the system they live in evolves with those changes, since it's all #opensource. There's never been anything like it.
#localMaxVsGlobalMaxOfNashEquilibrium
#onScreen
#prilistPrilist
#(other/empty)
#howToFeelAndPlayWithAiOnScreen

#occamsRazor
The simplest theory or design that works well should be preferred. "Works well" means you're not sacrificing anything big by simplifying further. The trap of software complexity is that it's so easy to drop in parts of other software, without knowing how they work, that eventually nobody knows how the whole works, not even its creators. The complexity is then amplified: since nobody knows all the code in it, similar code gets added instead of generalizing the good code already there. A system can be both small and advanced, but it takes the extra work of understanding every part, adding only the smallest needed parts, and preemptively fixing any possible problems based on that understanding.
#prilistPrilist

#fluidMusicTools
#prilistPrilist
#howToFeelAndPlayWithAiOnScreen

#weightsNode
A mutable #datastruct used locally for fast-running computations, but only stored or sent through the network as #acyc. It works for sound effects (as manifolds of vibrating springs), neural nets, and cellular automata. The nodes are connected to each other in different shapes by weights, with a scalar number at each node. The main difference between uses is the function that updates those scalars based on the connected weightsNodes. I have this working for boltzmannMachine, 2d fluid, and sound effects. Sharing the same #datastruct greatly simplifies how these things will connect to each other.
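A minimal sketch of the idea above, not the real humanAiCore class: a node holds a scalar plus weighted links to other nodes, and only the update rule differs between uses (sigmoid for a boltzmannMachine-like net, spring forces for sound, neighbor averaging for fluid). All names here are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// Hypothetical weightsNode sketch: a scalar value and weighted
// connections to other nodes. The shape of the connections and the
// update rule are the only things that change between uses.
public class WeightsNode {
    public double value;
    public final List<WeightsNode> neighbors = new ArrayList<>();
    public final List<Double> weights = new ArrayList<>();

    public void connect(WeightsNode other, double weight) {
        neighbors.add(other);
        weights.add(weight);
    }

    // Weighted sum of connected values, the common input to any update rule.
    public double weightedSum() {
        double sum = 0;
        for (int i = 0; i < neighbors.size(); i++) {
            sum += weights.get(i) * neighbors.get(i).value;
        }
        return sum;
    }

    // Plug in any rule, e.g. a sigmoid, a spring force, or an averaging step.
    public void update(DoubleUnaryOperator rule) {
        value = rule.applyAsDouble(weightedSum());
    }
}
```

The same three-node graph could then run as a tiny neural net with `node.update(x -> 1.0 / (1.0 + Math.exp(-x)))`, or as a vibrating spring with a different rule, without changing the connection structure.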

#acyc
Acyc is the main #datastruct, a very simple way of organizing info that can hold anything on the internet: games, music, pictures, text, and especially the kinds of things #lispLanguage does. You start at the bottom with a point we call nil or end. It doesn't go anywhere else. ... The first object is nil, the only leaf. Every other object is made of 2 lower objects, which makes it a forest. Therefore the second object is the pair of nil and nil. ... I write nil as a dot . ... I write a pair in parens. Pair of nil and nil is (..) ... The next 2 objects I arbitrarily define as bit0 and bit1, which are (.(..)) and ((..).) ... listPow2 is a linkedList holding, at each index, either nil or a completeBinaryTree whose depth is that index, so the nil or nonnil entries are the base2 digits of the size of the listPow2. I use the list structure itself as the integer of its own size, and push and pop add and subtract 1 object at an average cost of 2 objects. Random access reading costs log time. ... I'm planning avl trees for log time writing, at the cost of having many possible forests for the same tree contents; listPow2 has only 1 form per content. ... A listPow2 of bits (viewed in blocks of 16 per char) can be a unicode string. Each char's tree of 16 bits, and each power-of-2-aligned run of adjacent chars, exists only once because of dedup. ... Typed objects are defined as (typVal (aType aValue)). typVal is an arbitrary forest node. aType and aValue can be anything. ... keyVal is the type of a key/value pair: (typVal (keyVal (aKey aValue))) ... A listPow2 of keyVal is a stack of keyVal, called an eventStream. It can remember versions of all or some changes to vars. I'm planning some caching to avoid the linear lookup of old versions. Think of this like a blockchain for var values. ... List, event listener, string, number, namespace, and open-ended expandable shapes of forest, all done with a single anonymous immutable #datastruct, or you could call it a kind of number.
A language is not truly functional if its variable names (as unicode bits) are not derived from functions.
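The bottom of the construction above (nil as the only leaf, every other object a pair of 2 lower objects, each shape existing only once because of dedup) can be sketched like this. This is an illustrative toy, not the real int64-array storage; the dedup table here uses an in-memory map as a stand-in for hash consing.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy acyc forest: nil is the only leaf, every other node is a pair
// of 2 lower nodes. Dedup guarantees one object per forest shape,
// so reference equality (==) is shape equality.
public class Acyc {
    public final Acyc left, right; // both null only for nil

    private Acyc(Acyc left, Acyc right) {
        this.left = left;
        this.right = right;
    }

    public static final Acyc NIL = new Acyc(null, null);

    // Dedup table: same 2 childs -> same object (hash consing).
    private static final Map<List<Acyc>, Acyc> dedup = new HashMap<>();

    public static Acyc pair(Acyc left, Acyc right) {
        return dedup.computeIfAbsent(List.of(left, right),
                k -> new Acyc(left, right));
    }

    // The arbitrary definitions from the text: bit0 = (.(..)), bit1 = ((..).)
    public static final Acyc BIT0 = pair(NIL, pair(NIL, NIL));
    public static final Acyc BIT1 = pair(pair(NIL, NIL), NIL);
}
```

Because of dedup, building (..) twice returns the same object, which is what lets power-of-2-aligned substrings and repeated subtrees exist only once.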
#acycPair
#acycPartPacket
#lispLanguage
#datastruct
#prilistPrilist

#acycPartPacket
Acyc is normally stored in an array of int64s, each being 2 int32s that point at 2 lower places in that array, proving it's acyclic, aka a forest. Through secure hashing, every #acycPair in every forest has the same hashcode if it's the same shape of forest. That means we can cut a part out of any forest and send it to someone else, and as long as they have the lower parts down to where it was cut, they can verify, as certainly as the secureHash algorithm (normally SHA256) isn't cracked, that each #acycPair has the same forest shape everywhere we send that #acycPartPacket. The data can't be faked if the secureHash algorithm is actually secure. We only need to send the SHA256 values (256 bits each) for the places we cut, around the borders of the #acycPartPacket, not in the middle; the middle uses int64s, which are much smaller than the 512 bits it would take to name both children by SHA256. This protects data integrity in the public space, because many computers will hold different combinations of the shared forest. Data that's used more often will exist in more copies, always with the same hashcode. Data is equal when it has the same forest shape; equality does not depend on an address in any array, which can differ between computers. #parallelHashConsing can be used for extra safety against any one secureHash algorithm being cracked.
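The address-independent hashcode rule above can be sketched as: a pair's hash is the SHA256 of its 2 children's hashes, with a fixed hash for nil. The "nil" seed bytes and the plain concatenation are illustrative assumptions, not the real wire format, but they show why the same forest shape hashes the same on every computer regardless of array addresses.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of per-pair hashing: hash(pair) = sha256(hash(left) || hash(right)).
// Only the forest shape below a pair affects its hash, never its address.
public class PairHash {
    public static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    // Fixed hash for the nil leaf (seed bytes chosen arbitrarily here).
    public static final byte[] NIL_HASH =
            sha256("nil".getBytes(StandardCharsets.UTF_8));

    // A pair's hash depends only on its 2 children's hashes.
    public static byte[] pairHash(byte[] leftHash, byte[] rightHash) {
        byte[] cat = new byte[leftHash.length + rightHash.length];
        System.arraycopy(leftHash, 0, cat, 0, leftHash.length);
        System.arraycopy(rightHash, 0, cat, leftHash.length, rightHash.length);
        return sha256(cat);
    }
}
```

A receiver of an #acycPartPacket recomputes these hashes bottom-up from the border hashes it was sent and rejects the packet if any pair's hash disagrees.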
#parallelHashConsing
#acyc

#parallelHashConsing
To protect against the possibility that any one secureHash algorithm may be cracked, which would allow #acyc and #acycPartPacket data to be faked as appearing different than when it was created, multiple secureHash algorithms will be allowed at once in parallel forests. Each #acycPair's SHA256 hashcode depends only on the SHA256 hashcodes of its 2 children, and its algorithmX hashcode depends only on the algorithmX hashcodes of its 2 children. So the hash forests are independent of each other, and new hash algorithms can be added by anyone or any group at any time, without permission or even knowledge of the rest of the network, unless they choose to publish those hashcodes. Any such hashcode, if you have the forest below it, can be translated to other algorithms of hashcode. So it's a very flexible system that will not go down just because a single secureHash algorithm is cracked, even if it's the only algorithm in use at the time, because existing forest shapes can instantly start being translated to a new secureHash algorithm. It's therefore important to have at least 2 secureHash algorithms ready in every version of the software: one the default, and the other ready to start #parallelHashConsing (both at once, with the new one eventually becoming the main algorithm) whenever any 2 different data are found that have the same hashcode (which I'm not aware has ever happened in SHA256, but we should be ready). It would be even safer to use 2 secureHash algorithms at once, so when one is cracked we still have the other, with a third ready to spring into action to replace the cracked one, and over time add more secureHash algorithms. Remember, in #acycPartPacket these extra hashcodes only take space on the borders of the packet, while the middle, which is most of the forest's shape and size, uses int64s; there the secureHash algorithms are computed only locally, not using network bandwidth for that part of the proof.
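The translation step above (recomputing an existing forest shape bottom-up under a new algorithm) can be sketched as one recursive function parameterized by the algorithm name. The `Node` class, null-as-nil convention, and "nil" seed are illustrative assumptions; the algorithm names are the JDK's standard `MessageDigest` names.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of parallelHashConsing: the same forest shape hashed under any
// number of algorithms at once, each hash forest fully independent of
// the others. Translating a shape to a new algorithm is just a bottom-up
// recomputation with a different algorithm name.
public class ParallelHash {
    public static final class Node {
        final Node left, right; // both null = the nil leaf
        public Node(Node left, Node right) {
            this.left = left;
            this.right = right;
        }
    }

    // algorithmX's hashcode of a node depends only on algorithmX's
    // hashcodes of its 2 children, so forests for different algorithms
    // never mix and new algorithms can be added at any time.
    public static byte[] hashForest(Node n, String algorithm) {
        try {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            if (n == null) {
                return md.digest("nil".getBytes(StandardCharsets.UTF_8));
            }
            md.update(hashForest(n.left, algorithm));
            md.update(hashForest(n.right, algorithm));
            return md.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

If SHA-256 were ever cracked, the same `Node` shapes could immediately be re-hashed with, say, "SHA-512", and both hash forests could run side by side during the transition.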
#acycPair
#hashConsing
#acycPartPacket
#prilistPrilist

#howToFeelAndPlayWithAiOnScreen
#smartblob
#fluidMusicTools
#prilistPrilist
#(other/empty)

#multiplayerComputer
#prilistPrilist

#permutationsOfMirrorNeurons
#prilistPrilist

#contrastiveDivergence
#prilistPrilist

#localMaxVsGlobalMaxOfNashEquilibrium
#prilistPrilist
#smartblob

#eqxor
#prilistPrilist

#similarGames
#plink
#prilistPrilist

#bellAutomata
#onScreen
#prilistPrilist

#theGames
#prilistPrilist

#mindmap
#prilistPrilist

#neuralTool
#prilistPrilist

#weightsNode
#prilistPrilist

#bayesianCortex
#prilistPrilist

#opensource
#prilistPrilist

#onScreen
#neuraltool
#smartblob
#neuralPixels
#sparseDoppler2d
#bellAutomata
#prilistPrilist

#humanAiCore
#prilistPrilist
#humanAiNet

#(other/empty)
#prilistPrilist
#smartblob

#gameful
#whyHumanAiNet

#c
#humanAiNet

#acycPair
#parallelHashConsing
#acyc

#lispLanguage
#acyc

#datastruct
#acyc

#hashConsing
https://en.wikipedia.org/wiki/Hash_consing
#parallelHashConsing

#plink
#similarGames

#neuraltool
#onScreen

#neuralPixels
#onScreen

#sparseDoppler2d
#onScreen