My Christmas viz is made, and it's built in RAM too!
High nodes don't store text; they are just pointers that rebuild it (or are activated by it)!
Sequence matters! Left child | right child is how it recognizes 'he' vs. 'eh'.
It can traverse up/down by pointers; each node knows its children and parents.
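A minimal sketch of those nodes in Python (all names hypothetical; assumes binary parts where only the leaves carry characters):

    class Node:
        def __init__(self, left=None, right=None, char=None):
            self.left = left       # first child in the sequence
            self.right = right     # second child (order matters: 'he' != 'eh')
            self.char = char       # only leaf nodes carry an actual character
            self.parents = []      # upward pointers; one part can serve many parents
            for child in (left, right):
                if child is not None:
                    child.parents.append(self)

        def rebuild(self):
            # high nodes store no text: they rebuild it from their children
            if self.char is not None:
                return self.char
            return self.left.rebuild() + self.right.rebuild()

    h, e = Node(char='h'), Node(char='e')
    he = Node(h, e)                    # left | right encodes 'he'
    eh = Node(e, h)                    # same leaves, opposite order -> 'eh'
    print(he.rebuild(), eh.rebuild())  # -> he eh
    print(len(h.parents))              # -> 2: 'h' is reused by both parents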
It searches for existing parts first, then reuses them and builds.
Punctuation helps: it finds the biggest parts it can, then reuses them to build further.
bond order/segmentation/Byte Pair Encoding
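Byte Pair Encoding itself is simple; a minimal character-level sketch (toy text, hypothetical function name):

    from collections import Counter

    def bpe_merges(text, n_merges):
        """Minimal BPE: repeatedly merge the most frequent adjacent pair
        into one token, building bigger parts from smaller ones."""
        tokens = list(text)
        merges = []
        for _ in range(n_merges):
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            (a, b), count = pairs.most_common(1)[0]
            if count < 2:
                break  # nothing repeats; no compression left
            merges.append(a + b)
            merged, i = [], 0
            while i < len(tokens):
                if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                    merged.append(a + b)
                    i += 2
                else:
                    merged.append(tokens[i])
                    i += 1
            tokens = merged
        return tokens, merges

    tokens, merges = bpe_merges("the cat sat on the mat", 5)
    print(merges)   # frequent pairs become reusable building blocks
    print(tokens)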
'isa' and 'is a' are saved differently by a butterfly effect... this solves the sentence issue too.
When it searches, it's best to have everything go up in parallel (or not), i.e. for input 'hello' look for h, then h+e, then he+l, then hel+l, and so on... and build like that as well; it makes for simple code.
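A tiny sketch of that incremental build (a hypothetical vocab dict stands in for the node store):

    def build_incremental(vocab, word):
        """Left-to-right build: h, h+e, he+l, hel+l, ...
        Each step reuses a part if it already exists, else creates it."""
        steps = []
        current = ""
        for ch in word:
            current += ch                                # extend by one unit
            vocab[current] = vocab.get(current, 0) + 1   # reuse bumps the count
            steps.append(current)
        return steps

    vocab = {}
    print(build_incremental(vocab, "hello"))  # ['h', 'he', 'hel', 'hell', 'hello']
    print(build_incremental(vocab, "help"))   # reuses 'h', 'he', 'hel'
    print(vocab["hel"])                       # -> 2: a shared building block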
It is a white box: interactive, has a visual viz, scales (online learning), simple, unsupervised, RL.
(We give it installed pop-up questions like how to improve HDDs or cure cancer, or how to get a body or a visual cortex; it learns off big, diverse internet data and unifies it all, then uses Sequence Prediction / Translation / Summarization / Segmentation to answer its questions and create the next questions, chaining Forward using likely-good building-block answers based on a large-context bag vote.)
To narrow down the search space, it might want to chain Backwards out of dead ends. Then it informs us once the answer satisfies the global context!
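A toy sketch of that chaining loop. The knowledge base, questions, and answers below are made up for illustration; in the real system the answers and follow-up questions would come from sequence prediction:

    def chain(ask, question, goal_test, depth=0, max_depth=5, seen=None):
        """Chain Forward by spawning follow-up questions from each answer;
        back up (chain Backwards) when a branch dead-ends."""
        seen = set() if seen is None else seen
        if question in seen or depth > max_depth:
            return None                      # dead end: caller backs up
        seen.add(question)
        answer, followups = ask(question)
        if goal_test(answer):
            return [(question, answer)]
        for nxt in followups:                # forward chaining over likely branches
            path = chain(ask, nxt, goal_test, depth + 1, max_depth, seen)
            if path is not None:
                return [(question, answer)] + path
        return None                          # all branches failed: back up one level

    kb = {  # invented toy answers, purely illustrative
        "how to improve HDDs?": ("increase areal density", ["how to increase areal density?"]),
        "how to increase areal density?": ("use HAMR", ["what does HAMR need?"]),
        "what does HAMR need?": ("a reliable laser head", []),
    }

    def ask(q):
        return kb.get(q, ("unknown", []))

    print(chain(ask, "how to improve HDDs?", lambda a: "laser" in a))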
Translation can do summarization/expansion (aka super-resolution) also.
(It can change a book from French to English, or change its style or clarity, summarize it, expand/inflate it, entail it from the front/back to extend it, change its topic, or swap cat for dog as alternative words with close meaning! Or make a book from a video. AGI generates data/music/video Similar to what you want, ask it for, or put in as a prompt.)
It can do vision, or all sensory input! The better and more diverse its data, the better. But text is simpler and quantized. It can mod its own code later.
It can jump along my hierarchy/heterarchy > lawyers are men, are women, who eat food, made in a factory; a factory is big; big ball on the roof; big ball here; big ball in my bed.
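A rough sketch of that jumping: hop fact-to-fact through any shared content word (toy facts, hypothetical helper):

    facts = [
        "lawyers are men", "men eat food", "food is made in a factory",
        "a factory is big", "a big ball is on the roof", "a big ball is in my bed",
    ]

    def jumps(start, facts):
        """Walk the heterarchy by following whichever fact shares a word."""
        stop = {"are", "is", "in", "on", "a", "my", "the", "made"}
        path, word = [], start
        remaining = list(facts)
        while remaining:
            hit = next((f for f in remaining if word in f.split()), None)
            if hit is None:
                break
            path.append(hit)
            remaining.remove(hit)
            # pick the next bridge: any content word in the fact we landed on
            bridges = [w for w in hit.split() if w not in stop and w != word]
            if not bridges:
                break
            word = bridges[-1]
        return path

    print(jumps("lawyers", facts))  # lawyers -> men -> food -> factory -> big ball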
*****It can use relations to find the most general/compressed truths/structures/etc.
*****When a new, unseen phrase comes in, it can decide its bond order (hence structure) by how related phrases are built (see the sketch after these starred notes).
*****Update the default (like a GRU)... disambiguation... default structure... use context to tell two 'it's apart, so each calls up the right node in the story/working memory.
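For that bond-order note, a hedged sketch: map unfamiliar words to their nearest relatives in the heterarchy, then copy the segmentation of the related phrase that results (all data below is made up):

    # how already-known related phrases were segmented (chunk lengths in words)
    known_splits = {"the dog runs": (2, 1)}      # -> ['the dog', 'runs']
    relatives = {"wolf": "dog"}                  # nearest neighbors in the heterarchy

    def bond_order(phrase):
        """Guess an unseen phrase's structure from how a related phrase is built."""
        words = phrase.split()
        mapped = " ".join(relatives.get(w, w) for w in words)
        sizes = known_splits.get(mapped)
        if sizes is None:
            return [phrase]                      # no related build found
        chunks, i = [], 0
        for n in sizes:
            chunks.append(" ".join(words[i:i + n]))
            i += n
        return chunks

    print(bond_order("the wolf runs"))  # -> ['the wolf', 'runs']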
It can use known knowledge to decide an unseen phrase's truth, rank, shape, color, what it is, structure/segmentation, alternative words... using heterarchy translation activation winners.
Attention types: asking ourselves/others Questions, telling Answers... etc.
It can pick where to look, how wide to look, recognize what it sees, see what comes next, and adapt it to fit in.
The ends that prove 2 nodes are similar (cat drinks, dog drinks... but what about cat drinks vs. dog slurps up?) can do it too... whoa, fractal!
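A small sketch of that fractal similarity: shared identical contexts count at depth 1, and merely similar contexts (drinks ~ slurps) start counting at depth 2. All context sets below are invented:

    contexts = {
        "cat": {"drinks", "sleeps", "meows"},
        "dog": {"slurps", "sleeps", "barks"},
        "drinks": {"cat", "man"},
        "slurps": {"dog", "man"},
        "sleeps": {"cat", "dog"},
        "meows": {"cat"},
        "barks": {"dog"},
        "man": {"drinks", "slurps"},
    }

    def sim(a, b, depth=2):
        """Jaccard overlap of contexts; at depth > 1, contexts that are
        merely similar (not identical) also contribute: the fractal step."""
        ca, cb = contexts.get(a, set()), contexts.get(b, set())
        if not ca or not cb:
            return 0.0
        if depth <= 1:
            shared = len(ca & cb)
        else:
            # credit each of a's contexts by its best match among b's contexts
            shared = sum(max(1.0 if x == y else sim(x, y, depth - 1) for y in cb)
                         for x in ca)
        return shared / len(ca | cb)

    print(sim("cat", "dog", depth=1))  # only the identical context 'sleeps' counts
    print(sim("cat", "dog", depth=2))  # 'drinks'~'slurps' also counts (via 'man')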
Summarize so you can work with it better, recognize other things, and attend to key areas.
I am pro-Italian and speak fluent _
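Filling that blank is the large-context bag vote from earlier: every word in the window votes, so the distant clue 'italian' tips the answer. A toy sketch (the co-occurrence counts are invented):

    from collections import Counter

    cooc = {  # hypothetical accumulated co-occurrence counts
        "italian": Counter({"Italian": 8, "pasta": 3}),
        "speak":   Counter({"English": 5, "Italian": 4, "French": 4}),
        "fluent":  Counter({"English": 6, "Italian": 5, "French": 5}),
    }

    def predict_blank(context_words):
        """Bag vote: each context word votes for candidates it co-occurs with."""
        votes = Counter()
        for w in context_words:
            votes.update(cooc.get(w, Counter()))
        return votes.most_common(3)

    print(predict_blank("i am pro italian and speak fluent".split()))
    # 'italian' early in the sentence tips the vote toward 'Italian'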
Old knowledge/desires is the Generator and the Verifier in the same moment/process; it creates related new goals/data and is saturation sliding/transfer.
We don't generate 'hjfjkfkhhh', nor 'was me and cat is up there was her purse sold'... truth is bias/global context... it skips search space, using building blocks and translation by context attention.
Regenerate it out, and regenerate missing data.
Data evolution is data recursion (the Transformer architecture).
Using pop-up node-desire questions, it imitates/talks to itself and is a researcher... it must learn a model of the world, then skip through it.
All sensory data/Earth is made up of parts, and sequences too... it all has patterns, down to particle physics... all data is language / has recurring 'words' man made up, quantized, to help describe common frequent features.
Text has a lot to cover (deer have ears, the mouse is behind the garbage can...), but it covers it as text, and similar context will activate it anyway.