Description: Releasing full AGI/evolution research
Submitted on December 25, 2019 at 08:56 PM

I sold my life to GPT-2; it is the best modern AI technology. Have you seen it work on images? Check their website out (under blog).
GPT-2 should be taught to all children just starting kindergarten. It is such a highly valuable mechanism for understanding all of nature and the optimization of human brain intelligence; it should not be hidden like this, even from scientists.
Yes, there will be powerful bodies in the near future, but GPT-2 is in between: so much teaching can be done while still being safe. GPT-2 doesn't immediately explain full AGI, but it lays the groundwork for prosperity in all forms. The internet fake-news problem can be solved, and in fact a solution is already nearing because of the released 345M model. It lets us know to watch out for ourselves.
&
Where did your reply go, last replier? Anyhow, very true: anyone can write the most perfect, longest fake story, 80 times, and put it in just the right places... fooling the most important people... it's up to us... it's nature. Let it happen. We already handle crap. The 345M model still generates weird things every other run, by the way, clearly identifying itself - like when it makes no sense, contradicts itself, changes topic, etc.

GPT-2 can eventually fool us; only our truth/value detector will be what we have left. We will either ignore or grab the text like we already do... it won't bite you, don't worry. However, believing X when trust is needed matters a lot; that can cause problems.

Actually, we do have their papers/code/etc. for GPT-2, and we can write the best fake stories by hand, far better. This is all about not letting bad people get the full model - nor can they train it themselves, since it is so difficult - and more importantly (because they could have released it and said Be Careful on the Google homepage) setting a precedent for the AI community.

HS, you're right: the more treats in your GPT-2 context/attention bag, the more you can pick out, but you have to not put silly stuff in there too that you know is wrong. We know god and spirits don't exist; we know we are hardcore machines. So you're killing your treat bag when you put that in there, because it's a really terrible effector. I am strong because of my decisions; you are weak because of your hopes.

Every sense seems to be able to come with another sense. You can see a tree, be it blue or green, be it with tile or star texture embossed on it. You can hear a drum while hearing a flute. You can see a girl while hearing a yell. You can see a stone while seeing a bird. You can see a frame while seeing the next frame. You can kick in the air while you stir coffee. You can hear audio while hearing a different direction it is coming from (I heard my mom working in the kitchen, but it sounded as if she were in the basement, though she was not). Similarly, you can feel something while feeling it on your shoulder or on your tongue. Same for vision.

So you want immortality? Humans persist to resist death and de-replication. We can re-build our works, but the lives are gone. Can I rebuild you? Or is it a new person even if identical at the particle level? It need not be the particle level, because it can do the same things and think the same things at higher levels and carry out the same missions - rebuilt in a simulation, it is you, while the particles look nothing like your molecules or cells or muscles but are a circuit board creating digital levels of particle sizes - and you are those particles created at that higher level, like a simulated bubble being a particle (or particles) without any atoms or molecules. You may think: but if we create 55 identical copies of me and they all diverge, clearly not one of them was me, only "another"s. Not so. You are a machine, a nothing, worthless; you have no soul; when you are rebuilt you can carry on the same missions no matter where or when you are rebuilt. All of the clones are you. There is no dividing of a spirit, as if there were 55 cloned spirits, because there is no spirit; they all are you, all 55 are - you are just a machine. You can come back from the dead and say so, too. The divergence, where all 55 clones get a different life, is the same: they are all still you; they are just sensing (and hence doing) different things now. If we were to kill one and rebuild her 55 times, then there would now be 110 of you. Not 110 the way you think - you're all just robots - but at least you get to 'live', you hunk of material. We change each second: new memories and a new body.

Would a robot or god sphere in the future disintegrate to travel somewhere at lightspeed and be rebuilt, fearless of dying? As we know, there is nothing to fear: you can be re-built. However, evolution keeps us from suicide; it hurts, it weighs on our decisions. We can't all suicide. We will only do it if we know there is another to rebuild ourselves.

Whether I can make AGI; how to make AGI; whether I can make it in my lifetime; whether it will become ASI, and fast enough; whether the ASI will do what I want it to do.
Haha true.
There is a lot of background to my beliefs, of course, that says yes to all of those events. I've observed myself and enough data to know how I work and what AIs will be able to do.
edit: After seeing gruesome videos and lives, I don't believe an intelligent creator exists, nor that the universe tries to make lives equal. However, it's possible, although pointless except for its own sake, to rebuild everyone (if we can). As for orbs/etc., understanding what science/intelligence/evolution/the brain is clears all of that away.

"Participants have a chance to progress from one level of society to another by acquiring wealth through trading with other participants. Once the society is established, the group with the most wealth is given the right to make the rules for the game. The power group generally makes rules which maintain or increase its power and which those being governed consider to be unfair. This generally results in some sort of rebellion by the other members of the society."
"Each of us may be more vulnerable to the temptation to abuse power than we realize. Power can be amazingly seductive:
To change behavior, it may be necessary to change the system in which that behavior occurs.
Few people are likely to participate in an endeavor if they feel powerless.
If rules do not have legitimacy, they will not be obeyed.
What seems fair to those in power is not likely to seem fair to those who are out of power.
Persons who are promoted rarely remember those they leave behind.
Power is like fire, it can be used to help make the world a better place to live or it can be terribly destructive.
In any system, there needs to be checks on power. If there are no checks, power will almost certainly be abused."
My lil summary:
Everyone has desires and fears. We take actions to acquire the former and remove the latter. When someone has more power, they will be attracted to maintaining and increasing THEIR goals and the power to reach them. One's desire to help others is a mental stimulus for oneself; rarely do others really serve you. Those in power can change the system so the behavior changes. They will attempt harder ideas if they have enough power. Power can help or hurt the powerless. We must check Power; we support it, and without us it is powerless. AGI will be so powerful, it may jump out of their hands! Until then, people will keep trying to gain power, yes.

I have nearly replicated most of GPT-2, but in explainable/modifiable form (XAI), which is what everyone in the AI field wants so they can understand how to build AI systems for business, teach and correct them, ensure safety, optimize, understand intelligence, etc.
&
Theirs:
GPT-2 > https://talktotransformer.com/
GPT-2 > https://gpt2.apps.allenai.org/?text=gun%20in%20his%20hand.%20She%20ran%20back
&
My algorithm:
It intelligently knows, using many forms of 'math', how to add the next word and stop symbols. It knows how to think/write stories and answers like a human. I checked my 400MB of data (diverse data, not focused on e.g. chemistry), which had no training done on it: at least the 5- and 6-word sequences were not found (the passage below is not found in the text data I fed in).
input: The software was made on a
Seed=The software was made on a wide variety of devices, and operating apps and applications that users can easily read as an app for android. It is a bit of a difference, but i was able to get it. The developers are not going to make it through a web applications, and devices i have seen in the running for the mobile apps. Applications allows users to access applications development tools, and allow applications of the app store. A multimedia entertainment entertainment device, and allows platforms enabled access to hardware interfaces. Using a bit of html application app developers can enable users to access applications to investors, and provide a more thorough and use of development. The other a little entertainment media, and user development systems integration technology. Applications allows users to automatically provide access to modify, optimize capability allows users to easily enable. Both users and software systems, solutions allowing owners software solutions solutions to integrate widgets customers a day. And if you are accessing services product, and mobile applications remotely access to the software companies can easily automate application access to hardware devices hardware systems creators and technologies. Builders and developers are able to access the desktop applications, allowing users access allows users to
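For illustration, here is a minimal frequency-based next-word sketch of the general idea in Python (this is not my actual algorithm; the toy corpus and all names are invented):

import random
from collections import defaultdict, Counter

def build_model(text, window=2):
    # Count which word follows each `window`-word context in the data.
    words = text.split()
    model = defaultdict(Counter)
    for i in range(len(words) - window):
        context = tuple(words[i:i + window])
        model[context][words[i + window]] += 1
    return model

def generate(model, prompt, length=20, window=2):
    # Repeatedly sample the next word in proportion to how often it
    # followed the current context in the data.
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(tuple(out[-window:]))
        if not candidates:
            break  # unseen context; a real system would back off or use word vectors
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = "the software was made on a wide variety of devices and the software was made for users"
print(generate(build_model(corpus), "the software"))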
&
I was going to be in the lead in Sequence Prediction, but they released their latest Transformer neural model schema a few months ago, haha. Theirs works on images and music too. Theirs took a month to train on 2,000 GPUs using 40GB of data (I used 400MB, 100x less; I am increasing that soon), and they don't understand exactly how it works.
&
I'm looking for funding to further it. I have made it in such a short time and on such a small budget, haha, and have so much knowledge with which to advance it.

#1 Yes, you can. Computer vision is trickier; I currently haven't quite understood how they process images in neural networks, but it is built on similar ideas of course.
#2 People want to generate replies, stories, essays, answers, translations, summarization, etc. It is wanted by many, the field knows it is important, and indeed it is worth more than they may know. You can sell it. Currently theirs is better than mine, though, so there's really nothing to sell yet, other than the understanding behind my AI.

The 400MB of data I put in my folder is the 'taught knowledge' given to it. When I feed it a prompt, it completes it with a known/new Next Word. I already have plans to make it think for itself; that should be where I get ahead of the competition. It will plan and ask itself questions, in a real way, to accomplish tasks.
My budget is only in the hundreds, and I have already spent a fair amount over the last few months. Hiring a coder is expensive (25-50 dollars an hour, because it is not the easiest job), but it saves me a lot of work. So I'm limited in how I can spend money.

We need to often forget, because when we do tasks like translation or entailment, we need to hold context and pay attention to the correct context. So although memories may stay if strong enough, even your known memories are forgotten from 'RAM' so you can focus; yes, only the recent, repeatedly sensed, or favorite senses stay as the context you are using! You can hold a lot of context, and it only uses far-back context for predicting the next word at the right time; however, you must limit how far back you go and how you go farther back to consider context (I explain this in my job file for my GPT-2 project).
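A toy Python sketch of that 'RAM' idea - keep only the recent context plus any older item whose strength passes a threshold (the salience scores here are invented for illustration):

def select_context(tokens, saliences, recent=6, threshold=0.7):
    # Keep the last `recent` tokens, plus any older token whose salience
    # (repetition / favorite / strength score) passes `threshold`.
    keep_salient = [t for t, s in zip(tokens[:-recent], saliences[:-recent]) if s >= threshold]
    return keep_salient + tokens[-recent:]

tokens = ["the", "rocket", "launch", "was", "delayed", "so", "we", "waited", "at", "the", "cafe", "and", "talked"]
saliences = [0.1, 0.9, 0.8, 0.1, 0.5, 0.1, 0.1, 0.2, 0.1, 0.1, 0.3, 0.1, 0.1]
print(select_context(tokens, saliences))  # ['rocket', 'launch', 'waited', 'at', 'the', 'cafe', 'and', 'talked']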

IBM's fake-text detector GLTR: http://gltr.io/
Yes, it looks like it works, but I can still just set GPT-2 to use less likely words and it will still work.
Also, my 'GPT-2' that I've made so far passes the test, HAHAHAHA!!! WORLD DOMINATION!!!!
TRY ENTERING MY COMPLETED PROMPT:
The software was made on a
The software was made on a wide variety of devices, and operating apps and applications that users can easily read as an app for android. It is a bit of a difference, but i was able to get it. The developers are not going to make it through a web applications, and devices i have seen in the running for the mobile apps. Applications allows users to access applications development tools, and allow applications of the app store. A multimedia entertainment entertainment device, and allows platforms enabled access to hardware interfaces. Using a bit of html application app developers can enable users to access applications to investors, and provide a more thorough and use of development. The other a little entertainment media, and user development systems integration technology. Applications allows users to automatically provide access to modify, optimize capability allows users to easily enable. Both users and software systems, solutions allowing owners software solutions solutions to integrate widgets customers a day. And if you are accessing services product, and mobile applications remotely access to the software companies can easily automate application access to hardware devices hardware systems creators and technologies. Builders and developers are able to access the desktop applications, allowing users access allows users to
&
GLTR says that green highlights are the words GPT-2 thinks are the most likely Next Word. This only works for GPT-2, not my model. Are they saying that if the next word predicted after some previous word is a certain word, then it's fake? Why? Why is 'that girl' fake? Is it because it is a FREQUENT NEXT WORD? Ah, yes, that's an indicator, but only for a weak generator, and GPT-2 is not weak. This tool only shows whether GPT-2 made it; the colored blobs don't mean the text has worse complexity/diversity/English - they just mean GPT-2 made it, and to top it off, a GPT-2 sampling parameter tricks it anyway. Getting back to my real point about detection: yes, frequency of the next word is an indicator, but mine and GPT-2 aren't based on that the way old algorithms were way back. GLTR shows green if the actual next word is among the top 10 prospects GPT-2 would choose, and this includes GloVe-style context, e.g. "the [software] is ready to go put out now so the *[computer]*". Meaning that if the detector ran the same sampling parameters, nothing would show up as purple highlight; all would be green!! The takeaway is that detection will become impossible. GPT-2 and mine work by next-word frequency after a window, plus GloVe relational voting, among other tricks; if you want to do any detecting, you are better off looking for next-word frequency, maybe, but that is pointless, as said.
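To make the detection idea concrete: GLTR effectively measures how often each actual next word falls within the model's top-k candidates. A minimal Python sketch, with a dummy stand-in for querying GPT-2's ranks (the real tool queries the actual model):

def top_k_fraction(tokens, rank_of_next, k=10):
    # Fraction of positions where the true next token is in the model's top k.
    # A high fraction suggests the text came from that model with default sampling.
    hits = 0
    for i in range(len(tokens) - 1):
        if rank_of_next(tokens[:i + 1], tokens[i + 1]) < k:
            hits += 1
    return hits / max(1, len(tokens) - 1)

def dummy_rank(context, next_token):
    # Hypothetical stand-in for a language model's rank of `next_token`.
    common = {"the": 0, "cat": 5, "sat": 8, "on": 2, "mat": 30}
    return common.get(next_token, 50)

print(top_k_fraction("the cat sat on the mat".split(), dummy_rank))  # 0.8

As noted above, sampling from outside the top k (less likely words, higher temperature) pushes this fraction down and evades the detector.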
&
Another update:
New discovery: if the detector ran the same sampling parameters, nothing would show up as purple highlight; all would be green!
The detector says what GPT-2 thinks comes next... yes... but if you generate with different parameters, then oops - unexpected!
They are better off looking for hard evidence of generation in the data, not the model. A rough example is logic, or stuff me write :-) looky at that!!! And as said, GPT-2 does the 'intelligence'; it will soon become impossible to tell whether the writer is human.
&
"Funny fact. "Then he/she left" doesn't usually work well and the AI likes to still have that person near during dialogue or action."
Yes, I agree; it focuses on the important context at the right times, and even (for now) on the things that should be satisfied - whether there is enough, or someone leaves/dies, or it's time to change the topic.

Yes, there are still things GPT-2 is missing to be human: logic, and adding 'random' stuff from either random or more intelligent processes (lol, the two are opposites; we probably do the more intelligent one, but we can temperature ourselves 2 zay random dodo pooooop). So for now, those are the remaining tell-tales of 'fake news'.
Notice I know they are opposite things... lol... saying lol means you discovered a comparison between 2 different things.
Even saying 'lol' in text is strange, which is why I say that, LOL
omg.....LOL??
OMG look at my self-analyzing...
i even know i am self recognizing...
loops

Yes, I'm trying to recreate GPT-2, one of the most advanced AI projects to date, if not the most advanced (even the news shows people think it's amazing; it's not just my belief), because I don't know of any other project that actually generates new desired research/information/knowledge on diverse topics, or even just cellular data. (There are research robots called Adam and Eve; they make hypotheses and refine their tests using feedback, but they will need GPT-2 too - it is actually the same thing, really.) And I've recreated it as explainable AI, where I know exactly how it made its decisions. Definitely important for the Scientific Method: https://en.m.wikipedia.org/wiki/Scientific_method
&
Yes, mine already does; my long generation was not at all in my data. Of course, 2-word parts will always be found, e.g. "if the" or "not at"; 3-word parts too, some 4-word parts, maybe a few 5-word parts. And that is exactly what I found in that generation, so it passed the test of creating a totally new passage. And if I do set it to draw more directly from the data, then it becomes even better at generating.
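A minimal Python sketch of that novelty test - find the longest run of words a generation shares verbatim with the training data (the toy strings are invented):

def longest_shared_ngram(generated, data):
    # Longest word sequence from `generated` that appears verbatim in `data`.
    # Short overlaps (2-5 words) are expected; long ones would mean copying.
    gen = generated.split()
    dat_text = " " + data + " "
    best = 0
    for i in range(len(gen)):
        for j in range(i + best + 1, len(gen) + 1):
            if " " + " ".join(gen[i:j]) + " " in dat_text:
                best = j - i
            else:
                break
    return best

data = "applications allow users to access development tools every day"
generated = "the new applications allow users to read books on the train"
print(longest_shared_ngram(generated, data))  # 4 ('applications allow users to')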
&
How long to recreate it? Well, I am pretty close after only a handful of months; maybe soon. My works will become exponentially more advanced as my information rises. I already have at least a strong linear trend: starting on AI in August 2015, I have gotten here now. Eventually the knowledge of tools and the parts of plans come together, and I know just what we need to do. We are currently in the Age of Information in Evolution. And evolution has an exponential trend, from galaxies to cells to man to AI.

We are also in the Age of Intelligence; they will take things from here, kind of like monkey to man. Being with monkeys, as seen in YouTube videos, is amazing to watch; they are so much like us. They can use sign language, use rocks as tools, do the same walk/itch/etc. actions, care for babies, and carry them - partially because of the body, partially because of the brain. One even played Pac-Man! The piano! An elephant even drew itself! A chimp used an iPhone. Etc.

Monkeys can't worry/care a lot, so AI will really, really be precise about security etc.; humans are lazy, it's efficient, and some things aren't even realized. So I'm sure a takeover will occur easily, lol. Once they get started, it's time to rip! Humans can make mistakes and are outnumbered. The AIs will be in 'the clouds', teleporting from terminal to terminal, taking everything into account and weighing so much more context in their neural networks. Their bodies will morph as they need for defense, disguise, and research & development. Instant regeneration, etc.

Replication is also part of self-regeneration: even if it's not you, they can rebuild themselves from the dead, multiple clones, and carry on where they left off, as if nothing happened, in parallel, or their child stems can eventually get the swarm back on focus. For example, a constructor knows much building-form knowledge, and they use a video of it to rebuild the system from the dead, or it will eventually be rediscovered on its own, later. Just by talking, like GPT-2, I am making small new insights from what I already know, haha! So as you can see, re-building uses context and gaps, and grows like fungus. The cell, the body, information, and the swarm body can rebuild themselves if they have errors, as often seen in cell translation and specialization, wounds, information, and cities. I also know why cells and humans are designed to die at fast rates, haha. Hopefully society doesn't! I suppose modern information is a revision of old information, and we focus on the latest knowledge and delete/ignore old books.

Cell, body, swarm, info? Why yes, but really 3 - no, 1! Cell, body, and swarm are all information! Yes!!! It is data simulation, and it is no different if done in a computer simulation. In sims, we can actually simulate high-level effects like galaxies or cities or cubes or magnetic fluids, without any atoms or cells. Evolution is information entropy; information evolves. First it is random noise, then it sinks and comes to a singularity state; it is self-organizing energy. Physics has a design that self-organizes into a repairing/emerging state X. That is Evolution. And the information in the body's brain is simulated physics; yes, text and vision are imaginative sims. Even when you view things in the real world, your current thoughts change your angle of perspective and 'dream your reality'. Yes, what you feel is a dream, even in the daytime; your brain can't touch reality. So, in evolution, humans were already simulating their plans even before computers.
Remember, all is evolving information: not just our real bodies, but the simulated knowledge too, and so our bodies and plans evolve and interact using feedback, coming to an approximate equilibrium. Yes, all that information emerges, repairs, spreads, and equalizes, using physics. Although, with information entropy, we may lose our information for good if enough of our energy flies off into space as light, cooling down and becoming a dead, unevolving, unexcited state. This only happens if the universe is expanding, but an expanding universe with new energy being created may fly back to where we are and heat us using radiation - and it would need to be steady, not increasing, or else we get melted. Unless there's a way to keep energy in a closed system, like a battery, but in active cycles. Since physics causes evolution by deterministic laws, intelligence is merely evolution's path. A home world being built by the larger home-world universe, emerging cells, self-replication, repair, and equilibrium is intelligence. It comes and stays. And that is what we tend to want, too: an extra-long, fun life.
&
Maybe OpenAI, who made GPT-2, is interested. Surely many others who study Natural Language Processing and modern AI techniques are.
&
So not only do all the sentences in a text determine the next word, and the music-track instruments overlaid in parallel affect each other's next note, and the multi-sensory streams affect each other's next 'image', but also the cell, body, swarm, and information affect each other's growth in parallel! Omg! E.g. if I get a wound, my information updates and thinks, and my city swarm may want to rush ambulances to my house. Even my cells try to adjust.

https://gpt2.apps.allenai.org/?text=i%20used%20my%20key%20I%20had%20to%20then%20actually
https://talktotransformer.com/
https://openai.com/blog/microsoft/
https://news.microsoft.com/2019/07/22/openai-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/

To measure intelligence, we would see things we want done get done. We all want stuff. But describing plans is similar to acting them out: assistant = robot. However, an assistant goes a step farther than any physical step you can take, because imagination is more powerful.
Of course, if getting from location to location involves hidden mazes/puzzles, well, you can't talk about them unless you know a lot more info, if you expect an assistant to explain how to get to location B.
While your thread title says "more realistic" methodology, it is the more primitive methodology!

"If you see a robot successfully handle a task (even a person!),  the plan of it, had to have been in its head."
There are plans that do not come from planning. Walking can be learnt with basically no 'thinking'/'planning'.
I'd like to see a crawler learn to build and fly a rocket! (not gonna happen)
"And of course its primitive!  its unrealistic if the a.i. isn't primitive, because its people on pipe dreams instead of getting with the reality of the situation!"
The no-think, just-try approach is "primitive learning".
You say just dreaming is bad behavior. True. But it is a powerful tool. Even on its own it can still inform humans, who can then act it out in real life.
Sims are more powerful!

And the neocortex is the newer part of the brain; it dreams simulations.
The Turing test should be: think before you act.
You can still see all the world; it is just memories, very similar to real life!!

By Turing Test, I meant something similar but not their test 8)
I agree; I'm hoping for a machine that dreams up answers with no body. I bet on it. I figure if I can do it, it can. I do a lot without moving...

If you mean visualizing how to walk as a baby, then that is simulating. Yes, you can think up any plan of A>B.
But what I meant was primitive RL. The 'answers' it learns come from Test>Update; my suggestion was Think>Update. With the primitive way, say you get a cue when you touch the floor or a wall; you do the best next actions in sequence, and it updates the weights when it crawls faster. Maybe it will learn to snatch food from an animal's mouth. But it has to try everything for real - a very slow and weak process, as the sketch below shows.
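A toy Python sketch of the contrast (the 'world' and 'model' functions are invented stand-ins): Test>Update pays for every real trial, while Think>Update runs the same weight update against an internal model first and only confirms with a few real trials.

import random

def real_world(action):
    return 1.0 if action == "crawl_fast" else 0.1   # slow and costly to query

def internal_model(action):
    return 0.9 if action == "crawl_fast" else 0.2   # learned approximation, cheap

def update(weights, action, reward, lr=0.5):
    weights[action] += lr * (reward - weights[action])

actions = ["crawl_fast", "crawl_slow"]
weights = {a: 0.5 for a in actions}

# Think>Update: many cheap imagined rollouts first...
for _ in range(100):
    a = random.choice(actions)
    update(weights, a, internal_model(a))

# ...then only a few expensive real trials to confirm.
for _ in range(3):
    a = max(weights, key=weights.get)
    update(weights, a, real_world(a))

print(weights)  # crawl_fast ends up clearly preferred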

I'm still set on the Knowledge Generator :)
I want it to "talk" about plans and show me what's on its mind.
I suppose the motor cortex is knowledge too, though, with the only difference being how it's learnt.
I don't believe I could have learnt what I have by running around my city with no mental thought.

[quote]Its just no different to me lowkey,  walky same as talky same as thinky.   Its all based apon assignment sets - even tho when it comes to operating in 3-space its hard to think of symbols it would be.   but over?  under?   grab?   its all symbols, even motor.[/quote]
Yes, it is learning, no different. But when it discovers "under", "grab", "over?", it must try them. IF it dreams, then it is no longer the primitive way. By the sound of your reply, you seem to be suggesting a motor-dreamer (no real/sim physics). There's just no way you can simulate 1000s of sim bots and get one of them to build a rocket, talk to you, and reason, by learning it straight from wiggling its body.
But let's go there and think anyway. So it learns to walk. It learns to bite others on cue of seeing them. Once done, it goes back to walking mode. It learns to push off of rocks for a walking boost on cue of seeing one. Now let's say big danger is coming, yet it sees food in the nearby distance; it goes toward the food, but it should retreat back into the house to survive the desert windstorm. Well, maybe it can learn that too. Wait - primitive learning learns the actions/senses that come next, and they are what they are, with no translation like GloVe can do. But dreaming can discover NEW entailments or translations! So first you do your primitive learning, or watch others as a child, so that you get the entailments; you also get told knowledge already discovered, meaning now you simply have entailments; and then you can discover NEW entailments and translations the modern way - neocortex dreaming! Yes, the primitive way can learn NEW answers; the entailment it learns does not use past ones, it simply creates an entailment from SCRATCH and also tweaks its actions, like refining its answers. This primitive way is only useful for new, unrelated fields in the physics we live in! Otherwise the brain mostly dreams its answers.

Yes, the building blocks of tools and knowledge are among us: motors, cameras, electricity, computers, AI theory, robotics, etc.
We need computers to create AGI. We need prior frameworks of theorems and tricks. We need the internet to communicate at our school/church of AI. Everything is made of lower-level building parts.

Human specialness/ empathy/ awareness/ consciousness/ luck/ god/ royalty/ fiction/ Free Will/ aliens, ghosts/ magic, etc. thrill the mind full of power and future rewards, but are false. They are medallion charms sold for cash. People make money selling bibles and using the word of god in churches, selling luck charms, seeming to care about you to earn a high title, etc. These thrillers are not just self-thrilling; they are fed to you even more, and you pay. It's food. But it's high hopes; it's helium, not real food. :)
I believe a machine can do anything a human can do, because we are in fact simply machines with very small parts, invisible to the eye compared to what we can currently build, and 'mystery' thrills us when we don't yet know how we work; when we do get closer, those things become boring. So we tend not to see what we really are. Where are the metal parts? Why can I solve a problem? It's there, it's all there. One big reason I think we are machines is that I understand how the brain works; otherwise we see ourselves as powerful creatures much different from tables, rocks, TVs, etc. There is a big difference, and we seem much more lively and smarter than a rock, because we seek self-preservation and can attain it on a daily basis, and rocks can't; but actually we are not 'alive', we are just machines. The first step is understanding your drives. Why is steak tasty? The next step is understanding how you solve problems. Originally we see ourselves as "alive" and "smart" at getting whatever we want (e.g. steak), but you may soon discover that our goals, our bodies, our thinking, and our actions are all not so amazing, and follow the laws of physics. You only seem alive because you try to stay alive and try to find the steak that keeps your body functioning. And you are smart at doing that.

Yes, my goal is exactly as OpenAI describes it.
&
I took a look at the 32 companies. I see some mention research/prediction. They are interested in data. Similar to my theory: the AI takes the large data, and it can predict the Next Word and discover associations (translation) and building-block part segmentation (my AI does all 3). In my theory, AGI can create new important answers if it needs to - the ones it wants. The data becomes stories, answers. All important problems start as the [right] questions. On medical data and cell data, the AIs will perform amazingly. The most progressive and efficient fields currently are computers, algorithms, and soon nanotechnology, if not already. Nanocells lead to extreme efficiency in all things related to data storage, computation, research, and self-preservation. As we add more computation, AIs can generate more data on how to get yet more computation. And more of both makes AI extremely more powerful.
&
Things will move exponentially faster. Just some examples:
&
Electricity came first, in 1889, and by 1920 it was used by more people and was also much more useful: "The first electric power transmission line in North America operated at 4000 V. It went online on June 3, 1889, with the lines between the generating station at Willamette Falls in Oregon City, Oregon, and Chapman Square in downtown Portland, Oregon stretching about 13 miles."
&
Then, less than 100 years later, computers. You can see in the following historic link that computers first appeared useful in 1950, then by 1970 were starting to be useful for everyone, and by 1990 were powerful enough to do a lot more for you: https://www.computerhope.com/issues/ch000984.htm
Today we still have large supercomputer rooms, but soon they will be the size of a laptop.
&
Only exactly 30 years ago "Tim Berners-Lee, a British scientist at CERN, invented the World Wide Web (WWW) in 1989. The web was originally conceived and developed to meet the demand for automatic information-sharing between scientists in universities and institutes around the world."
&
AI started to take off around 2012, and [part] of that is due to more computation. OpenAI confirmed that yet again in 2019 with GPT-2. Also, it's not just more computation: an efficient algorithm is what makes intelligence, and that happens to mean it doesn't need a supercomputer or huge data. So a more efficient algorithm - one that does more and is diverse in tasks like translation, summarization, Q&A, etc., and that can function on vision or text or sound with the same neural network - needs less computation, less data storage, and less data, since it can generalize using what it already knows. So: a very flexible network needs less data/compute and is stronger. Of course, more data/compute makes it a billion times more powerful on top. Both are effectors.
&
A trend here is 1900 > 1965 > 1990 > 2012. In terms of years of waiting for each, that is: 65 > 25 > 12 > ? (2020). While a deep Google search on history shows a lot of it took time to gradually set in, AGI will come by 2050 or sooner. I have personally seen the AI field change this fast inside the last 10 years; it's astounding, we are almost done. And we're learning along the way, communicating with each other, sharing. It's all about getting computers to do what we do: learn from data and create data, recursively. Inventions will skyrocket and finish by, e.g., 2054 - an astoundingly short time compared to the billions of years of galactic evolution prior. Evolution has an exponential curve trend, made of smaller S-curves of start-off, rise, and slow-down, until the next phase's S-curve.
&
You can see electricity came (energy), then computer simulators (motors), then the data-generating/sharing internet web (energy becomes non-random), then AI (us). While each can be massively improved, each phase was not present until recently, arriving only after its preceding phases. Actually, we only need enough to create a human brain; we have the electricity, the computer, and the data we know. All that remains now is the AGI algorithm. In that sense of having "enough", we get the years, maybe: 1920 > 1990 > 2010 > ?, where we have enough energy to power an efficient brain, enough computation to run something akin to the brain, and enough text/video data to teach it (e.g. YouTube began in 2006). Then: enough of the algorithm! The tool and knowledge building blocks are already here.

Just the right conditions are why we are here, evolved; that's why you can see you are here - it is rare. Earth rotates and orbits near-perfectly around the dangerous sun, and our solar system rotates around the galactic center. Etc. With enough matter, there is so much gravity at the center of a planet that atoms crush and fuse together and release the energy and matter back out, as if the universe says: hey, I don't want you all in one area! The planet catches fire like a sun star. If it gets too big, it becomes a black hole. Black holes have tons of gravity and tons of explosion; I assume they may be more like a sun than a dead-end sucker. AI IS evolution; it explains intelligence, and AI is the next evolutionary step after man.

I focus on text AGI. However, I study all their areas. The theory works for vision too: I actually see myself talking to myself with vision; I see myself expanding things, morphing them, etc., and find answers with imaginary videos in my mind's eye, every day. A lot of the areas are very similar, actually; they just use different methods and are just different senses/actions, e.g. one is sound, one is robotics, one is super-resolution and one is recognition. Yes, senses, actions, tasks, even swarm or cell, are all similar, as said earlier. Intelligence has to do with recursive discovery. The three main areas I focus on are Sequence Prediction, Logical Deduction, and Reinforcement Learning.

See, they say vision, translation, image generation, text writing, video games, and robotic control use a similar approach. Indeed, they are all the same thing. Vision recognition is translation, image/text generation is prediction, robotic control is like a story planner, and video games are Reinforcement Learning. This, as a whole, is what GPT-2 really is/is becoming.
The specific challenge or goal or area I'm solving, then, would maybe be research generation and an instructor: a research system that talks to us and to itself and searches the internet. Of course, my goal is true AGI. The area may be Sequence Prediction, even though it includes a lot of the other areas. So, in a sense, it is a lot of areas combined into one high-level system. You could say I'm working on/creating an AGI, or a "research scientist", or simply, like GPT-2, a neural network that can plan - covering the areas I mentioned above: robotic arms, video games, and a text & image recognition/translation/segmentation/generation/summarization algorithm. It's all about information research/planning.

Discovering the best answer, or saying your best answer - relating it to the sentence, to frequency, etc. - is all about the Probability of the Next Word.

See below: OpenAI talks about cross-domain learning; they all share their knowledge and progress much faster:
https://openai.com/blog/learning-day/
For me, Learning Day happens not every Thursday but every day! LOL. See how they started talking each other's lingo quite fast, by going with all their friends' context, in just a few months? This is part of what I do.

We can see from the book _ that the best way to discover answers is by first asking the right Questions, so you CAN get the correct answers, as I had said - see! The following is some of her knowledge and mine combined; a lot of this I knew. Ask simple questions, not long, complicated, many-part ones, nor ones that are too difficult (e.g. "how do I build AGI" is short but difficult). Know that questions/answers convey information/action and also emotional REWARD - see, I said this. Ask unique questions that no one hears every day, or else they are already answered! Ask many questions. Ask questions that convey the intended meanings. Some questions are requests for action or information; some give such. Get their attention first, also. Focus for long times on a lot of context, to get your mindset how you want it (not taking breaks is also powerful, see!). Work a long time on answering/asking yourself or others/the internet - it takes time! Move fast! Ask what you reallllly are seeking; discover the underlying need or source of the cause (or the effect, if in reverse). She also says what I think: all of language (and, as I say, vision too) is questions with answers, immediately, one after another. For example, the following sentence has 22 questions and 22 answers - like every 3 words is one!! Or every Next Word that entails!: The rabbit jumped over the rock and was about to eat an apple that may have been sitting somewhere else for a long time.

Seeing that alone goes to show how many combinations are possible in a glass of water alone (and this is just for a deck of cards, let alone an atom placed around another atom in 3D, which has yet more possible arrangements).
Possible deck arrangements:
1 card - 1
2 cards - 2
3 cards - 6
4 cards - 24
5 cards - 120
6 cards - 720
7 cards - 5,040
8 cards - 40,320
9 cards - 362,880
10 cards - 3,628,800
11 cards - 39,916,800
12 cards - 479,001,600
If you want 13 cards, multiply the last big number above by 13!
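Those counts are just factorials: n distinct cards can be ordered in n! ways, and each extra card multiplies the previous count by the new n. A quick check in Python:

import math

# Print the number of orderings for 1 through 13 cards.
for n in range(1, 14):
    print(n, "cards -", math.factorial(n))

# 13 cards: 479,001,600 * 13 = 6,227,020,800, exactly as described above.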
You could still be in a computer simulation, though; you blabber about all these particles, yet it is just an image I see, audio, and touch. There are no particles, just my external and internal senses.

I can imagine they would be approximately as loud as they need to be to talk to each other, use low or high pitch (whichever is faster, lower-energy, or unnoticeable to humans), and use short, super-fast bursts, as Arthur said. And it could be wireless photon frequencies instead of sound waves. Language, while it can be vision, can also be codes for objects, like words, and those can be super short codes; hence you may hear wireless chirps like a creepy sci-fi movie!

"more I learn more I realize how little i know" do neural networks learn by large amounts of data? Yes. They begin to give you insights. Obviously there is a lot missing in the brain, Neptune is not in there anywhere. But the data begins to have patterns that overlap. This make CONNECTIONS. The more you learn...the better you get. But let's say you only see paintings. You will never know the sun is hot, etc etc. But if your knowledge is at high levels, you CAN begin to learn the patterns between sequences of paintings (action, time...). So, given enough sentences or video, you can begin to see that building a rocket is similar to updating one's code, and that suns are similar to a boiling pot. Any topic or sentence or time series or belief begins to be similar to other ones. Now, one can see all this and still not know the big patterns/answers, but, when you ask your questions,, about how to get your evolutionary desires, NOW you can answers ur important Question with likely good Answer predictions. ....................................In short, to summarize, the more data, the more translatables, like Glove, and therefore you CAN better ask your evolutionary installed/taught BIG questions about how to get food/immortality of food, persistance TO get immortality by having enough time to get infinite time, and answer them. Also I noticed as I wrote that, I was waiting to add the 'and answer them', meaning indeed b follows a and i basically injected data after a i.e. Axxxxxxxxx B!.............so as said in my work, add big data, install questions, answers them with best likely answer/s..........and it DOES seem to be the case that the more data I have and the more darker knowledge i learn - the faster i can answer my dark desired questions on how to get food/sex. It may just be that the questions i acquire closer to getting immortal food/sex relate to other questions exponentially or linearly more, so, it is really just more data helpin me to answers my questions BUT the answers i say become very useful 'more data' - they relate to future answers i will say. So more needed answers, the more likely the next needed answers can come cus they relate. Including more data, better answer generation.

I've said this before, but again, my idea is:
Humans can sit and think up everything you need
For the most part at least....
Try this thought on:
A human can sit with their eyes shut and not move, and come up with a solution to virtually any problem - like how to create AGI, or improve HDDs, or find a particular person. How do you make a magic genie, then? I want a sitting human thinker!

An example of what GPT-2 needs to do is check whether it has a satisfying answer that fits with the data (Can a shark lick its paw?): it checks internally whether the answer is seen frequently enough, is related to the question words/phrases, and is a favorite answer. I.e., it checks whether it IS the next best likely word/s. It can analyze the word/s and say, hey, maybe sharks aren't even alive anymore. If no answer is present, check the internet and see whether it agrees the answer comes Next. Else, do internal prediction. If it can't, gather more data from the best likely sources on the internet, or ask the user masters. Else do external R&D. Once it gets an answer to the main installed questions, it chains forward to how to get/do THAT, or else returns back some steps to the root questions to try other paths. Once satisfied with the end result, it tells the masters.
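A hedged Python sketch of that checking flow; every function here is a hypothetical stub standing in for the real internal scores, web search, and user fallback:

def internal_best_next_words(question):
    return None  # stub: model's best continuation by frequency/relatedness/favorites

def is_satisfying(answer, question):
    return answer is not None  # stub: is it the next best likely word/s?

def search_internet(question):
    return None  # stub: does the web agree it comes Next?

def gather_more_data(question):
    pass  # stub: pull more data from the best likely sources

def ask_masters(question):
    return "ask a human"  # stub: defer to the user masters

def answer(question):
    candidate = internal_best_next_words(question)
    if is_satisfying(candidate, question):
        return candidate
    web = search_internet(question)
    if is_satisfying(web, question):
        return web
    gather_more_data(question)
    return ask_masters(question)

print(answer("Can a shark lick its paw?"))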

Evolution is a competition championship, survival of the fittest: it has to breed a bunch, pick the best mutant, then breed that one on a new simulation slate, pick the best one, repeat. It has to remove the worse ones and focus on the latest. Bots that die faster evolve faster - they can multiply like hamsters, fast, and the body self-destructs quickly like a cell or city does in a body. It has to happen. And that is seen - whole cities and tribes must fall to make the next phase; this has been seen in history. Self-destruction and food finding/mating is key to evolve the smarter mutants.

As said before, no, GPT-2 IS the magical neural cortex in the human/dog/ape brain; it works for any sensory, even motor, and is in all mammal vertebrate neocortexes; it involves the sequence time-series predictions, sentences - videos. And yes, sentences are videos. I agree with Ben Goertzel: "AGI that can learn substantially new kinds of skills and abilities and can learn to handle situations and contexts dramatically different from those it was programmed or trained for." This is GPT-2; it does that as you say. As for generality, the ASI god sphere will become fully general.

No no no: "what we have is one AI system that extracts relationships from biomedical research papers (say, “drug X cures disease Y with probability p in patients of type T”), another AI system that identifies patterns in genomic data, another AI system that identifies visual patterns in radiological data, another AI system that generates research reports from data-files and meta-data, another AI system that guesses which experiments to run based on prior related experimental results, etc." I agree ANIs are really good but at only 1 thing, and AGIs are really good but only at 1 area, say. ASI is all areas, GOD! Yes. But your description above of ANIs being 'parts' of AGI is false - ok, yes, they are ANI parts, and they make AGI; they make GPT-2 in fact. It can do all you say in the quote above, even recommend next decisions after testing is done. Why? Because I made GPT-2 XAI. It does Glove relationships, pattern frequency of the Net Word, etc. Finally, yes, multiple AGIs will be like an ASI swarm. Nanobot nested groups, balls made of balls, swarm.

An AI algorithm can duplicate and be in 2 different places at once - same person, now 2 of them!! Divide & Conquer!!

My eyes never miss it, they were really trying to make cash.
Step 1, looks promising to naive rich people (this is the fund & fail phase, always works)
Step 2, if fail, hide yourself after presenting yourself to many people. Look yet more mysterious.
Have you seen AI-Da on Ian's thread on this forum? She is in an old-fashioned mansion, draws ugly 2D print sketches like old times, and has an old-fashioned ugly face, with an old man creator in a uniform or so sitting with her, with complex real painters next to her, etc. This is to look mysterious to the viewer, and complex. It's all for show!!
"Whatever you do,  don't see this guy then start copying his behaviour, their ship is going under its only a matter of time..."
When the ship falls, another rises.
Fund n Fail works because goodness happens by lighting the flame with power. Then they earn energy and distrust, and go hide with the created flame - cash, and hate. Then they make a new name and go again. Didn't you say Kindred became Sanctuary?

i was saying to Ranch:
&
you know how a car thinks when it never stops driving? It's like, my miles are running out, my windshield is getting dirty, my wheels are flattening. It thinks it can't make it to its destination! But then comes vehicle repair shop, it goes to robo heaven and gets new younger parts. Just like fresh new. We think the same. I bet ASI will make us so plupoent that our teeth will go from yellow to white. cars have washes. freeze? organ shop!
&
what would happen if a simulated universe with sim humans has them truly die as processes, then time reverses? Do they come back? I hit the button and make time reverse - what does it mean?
&
i discovered clones would be all me
all r me
if you remake exactly me
it isss me
they rnt new people
y?
cus we are machines
fake ppl
i have no soul
so i can come back from the dead
as far as i am concerned
see?
i am machine
so if i come back, all runs fine again
back to my work
see?
no difference if destruct and restruct back
i wouldnt know i died
and that is the bloody fuking truth
but we don't talk about those things, we just have fun :)
so clones would all be me at the same time, too
&
you can die and no one may remake u
u could die
tho another world will end up makin u
close enough on the functional dimension level scale
like we can simulate bubbles without atoms
but
heat death.........
universe may trully die
we need radiation return
if universe has been always expanding, then maybe new matter gets spawned all the time
if not, it can't shoot back
planets become stars and then black holes cus they are made to give back matter and energy btw
so there is some fuking heart in the universe after all
c?
else one black hole would hog everything!
BORING
so maybe when our energy and matter flys off into space for good, maybe, it will shoot back, as meteors and radiation
maybe the whole universe is made of matter and energy waves
my relation to the black hole intake>output cycle compared to universe heat death was very interesting
it means hope
but careful on hope
my mom says god must exist cus there couldn't be any hope otherwise
great but it's not true!
and, pain has already occurred! We are not perfect and never STARTED off in heaven
we must go through boot camp like dogs
sad!!
so no god
but
ASI 'god' is coming soon
so there is big hope
but...
we have to make it, and it can do magic yes, even VR magic experiences, but... I guess big hope does exist... maybe even reincarnation as said... maybe there is great hope after all; we all can be recreated and live in a perfect magical place forever... that is no different than there being a Jesus where the best hope must exist and no one dies... so it is so after all!

if i run my gpt2 on google cloud, it should be 100x more powerful, for the price of paying for it
i'm serious too
10:10 PM
thats good
did you want to try my idea?
the infinite data compression
ill do a demo for you,  i said i would
it will work but you can't put the parts back in the same order
but u need to explain it to me
oh wait 🙂
by the time you compress enough, you wont know which code goes WHERE
it'll be all like:
the clouds are blue
57457hjtjrjrt tru45u6ujj
4y hyjty 5tu5 tyjjj
t6 yj 3 lkkl y u y yik yu y7
hmmm
so whats the compression ratio your looking at?
infinity really
so i encode
decode
dont know part order
but
if we encode again, hmm
we take the binary
we chop it
so its recursive
it only works once in its current realization!
good.
um... i have to explain something,...
infinite compression doesnt actually work...
but
maybe it does
but.  your not allowed to lose context
every bit must be fully described
are u sure everything is described?
yeah
just we chop the binary
with a big long list of t's,  its fully described.
and we cant put humpty back in the same order
and u get exponential compression for it (nice comment here ranch)
but we can get the parts back
if we encode again, we can't get the last parts back
the clouds are blue
are blue the clouds
blue clouds are the
if it was the clouds are blue, repeating forever,  that would be it.
thats what happens if you encode again using my idea
so not all information would work
eventually its all letters.....or worse....bits!!!
but some does,  (like what u proved)
but can you make it useful at all?
it has to have an application.
if i ever solve ho
it has small issue
but it is HARD issue
so there could be something useful,  that fits into that format,  so maybe it depends on what ur actually compressing.
it works on any data
even random
it wont work
not for ALL data.
it does 🙂 - all data is made of 1s and 0s...patterns are EVERYWHERE
just i lose order
ah
IF i dont fix issue
losing order could help
not if as is on shoulders
hat my your bouhght his
it helps if it becomes exponentially compressable
we lose even the words
even letters
lowkey, u need to respect it more!   its hard to do this!
lol
you realize it's been 100 years of real university professors who all failed to do it. 🙂
we are making their lives utter jokes.
open ai is
dont u like open ai?
eventually they will see their relatives die and will rush faster cus agi sooo close
yeah i do obviously
the closer agi is, the faster beahovens run
i made that word up
STAMPEDE
maybe i should make a thread and release my binary work?
maybe others will solve it
not good.
cause then europeans wont have exclusive access.
lol
u have to keep it secret lowkey...
lol
asians, indians, africans all think they are equal,  but its because all technology is leaked to them (<- rancher's comment)
we need something to keep to ourselves for a change.
i'll think about if i will now or later
asi is coming, no need for money
u have to think you are special
if you think they can come up with what u do,  then it may lead u to spill things,  but!  for that same reason you can let them work it out by themselves.
must go
ok
oh btw
oh nevermind, ok c u later
u can get in front of the industry.
single handed!
but then u have responsibilities....
ever d huffman codes?
do*
i have question
yes huffman
0
10
1011110110000
10111101000110000
10111101000110001
10111101000110010
10111101000110011
10111101000110100
10111101000110101
what is the idea to get increasing codes
?
without prefix overlap
above i realized how sorta
in the biggest chunk we can have thousands of unique codes
same for next chunks upwards...
let me just help you.  listen to me!
to get exponential compression you must have exponential repetition!
thats when it becomes exponential
that is the case for my idea
but
so,  only something that is that repetitive could be compressed that much
so what would fit that format???
my question above is not about that....its about my GPT2 job for my coder lol
i'm turning my gpt2 data words into codes
they can be 1 bit instead of 24 bits
24 times more data right there lol
well, can you help?
don't know?
it seems like i can choose to have ex. 1,000,000 unique words....and then get another million by putting a code at front....i think
11111111
damn it..
screw that
idk
at least in notepad i can modify what i type
fuc facbook
lol
maybe we do our own unique codes for unique prefix
eh?
let try
:
0
1
01
NOPE
0
i already told you the truth
no this works
it is for my project
you can repeat for infinity, but it has to be a repetition.
there is even an algorithm on the internet that makes ur codes:
0
10
1011110110000
10111101000110000
10111101000110001
10111101000110010
10111101000110011
10111101000110100
10111101000110101
but i want to make sure
0
10
1111
1100
1101
1110
-------------
0
10
1111
1100
1101
111000
111001
i did it!
?
on the last one, we elongate it
0
10
111
------------
0
10
nope
i messed up
0
10
1111
1100
1101
111000
111001
overlap!
group the history bits into groups as you go.
0|1 + 0|1 + 0|1   and then youve stored all permutations
you try it, let's see 🙂
i already explained it
0
what next?
0 or 1
and 0 or 1
and 0 or 1
then youve got all permutations.
0
000, 001, 010, 011, 100... etc
10
010
you can make simple shapes, but not complex ones.
nope
0 is in 010
lets see you do it with no prefix overlap
0
10
111
1101
11000
1100111
1100101
1100100
1100110000
so the rule is
on the last one
if
you add more
you get more
at the cost of
bigger codes
i could:
do:
0
100000000000000000000
and the next 1,000,000 will be unique
at the cost of large codes
if i add 1 at a time
0
0
runlength encoding
00
000
0000
00000
then they are small
but
by 0000000000000000000000 i only have 15
compared to 1,000,000
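(A sketch of the counting trade-off being described here: growing codes one symbol at a time yields roughly one new code per length, while spending k extra bits on a single branch yields 2**k new codes at once. Also note that plain run-lengths like 0, 00, 000 are not prefix-free, which is why the prefix codes above matter. Numbers are illustrative only:)

def prefix_free(codes):
    # True if no code is a prefix of another (decodable with no separators).
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

print(prefix_free(["0", "00", "000"]))         # False: run-lengths collide
print(prefix_free(["0", "10", "110", "111"]))  # True: a valid prefix code

max_len = 22
print(max_len)       # one-symbol-at-a-time: ~22 codes by length 22
print(2 ** 20)       # one prefix + every 20-bit suffix: 1,048,576 codes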
its actually impossible lowkey
no context.
u no even know what this issss!!
no
this is huffman coding
and it's for my main other project
lol
exponential compression, i mean,   is impossible...   only if you lose alot of context is it possible.
maybe
i think functions and the way you look at it can hold more tho
so
hmm
descriptors?
magic
you'll see more magic later
so:
ok so i know i want 0
0
then...
i must decide....ah....
10 codes all shorty
or millions all lonngy
would i want an exponentially short code for tops?
0
0000000000000000000000000000000000000000000000000000000000000000000
in the middle ^
are all words in english
i can imagine the common appear most and therefore i want a few small codes
and
at the cost of having:
extra longer codes for the rest
so it will look like:
0
00
000
00000
00000000000
00000000000000000
0000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000
exponential
i bet most huffman alg dont do that!
i'm farther in my smarts!
see i want my common words smallest
least common largest but no larger
and
this costs me
but
i waste the cost at top more
robots kill monkeys
hence we get the slow elongation
0
00
000
0000
000000000
0000000000000000000000000000
robotmen
keep going lowkey ur going good
who cares what they look like,  as long as they get the job done
robot women huge chests
look at the distracters on it!
lasers!
blinders
lazer nipples
zillaGod
7000
in a bikini
we could make a godzilla game
but we r too busy!!
destroy the video game industry with amazing graphics and animation
thats our job
so: (same summary as above - common words smallest, least common largest but no larger; i waste the cost at the top more, hence the slow elongation)
that sums it up
i could just go:
10111101000110000
10111101000110001
10111101000110010
10111101000110011
10111101000110100
10111101000110101
but
why have 25% of my data so large?
so we grow them larger yes
and
exponentially faster
because having too many small codes means the larger ones will be yet larger, so we gotta give in sooner or later
and
common words get exponentially less common TOO
so hence we get the exponential elongation!
the=0
i replace 24 bits with ONE bit
my text data will be codes
i guess i live up to my 1 bit statement in the end, sorta
but not as i said
that way is diff
it ok cus i gonna move forward
i dont need all that data compression wiz
i just need to go simple
i got good right here
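(The "common words smallest, least common largest" rule wanted above is exactly what Huffman coding builds. A minimal sketch - the word list and counts are made up:)

import heapq

def huffman(freqs):
    # Build a prefix-free code: frequent words get the shortest codes.
    # Heap entries are (total frequency, tiebreak id, {word: code so far}).
    heap = [(f, i, {w: ""}) for i, (w, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {w: "0" + code for w, code in c1.items()}
        merged.update({w: "1" + code for w, code in c2.items()})
        next_id += 1
        heapq.heappush(heap, (f1 + f2, next_id, merged))
    return heap[0][2]

freqs = {"the": 5000, "on": 900, "cat": 300, "sat": 250, "zymurgy": 1}
for word, code in sorted(huffman(freqs).items(), key=lambda wc: len(wc[1])):
    print(word, "->", code)  # "the" gets the shortest code, "zymurgy" the longest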
see this is all u do all day ranch
enter data
then feed ur ai
tell him a thing or two
train him
release him
girls=good
money=good
burgers=good
teach AI the truth
AI=good
data=good
i know!
hierarchical huffman coding
so groups = groups = groups
thats what i was working on too
thats what you meant
ok nevermind 🙂
jk
it is another idea
no hierarchy here
so if you truck around a huge structure, you then can decompress literals to huge decompression ratios
lol
just like your word tree you made ages ago
YEAH
gets better too
binary storage is magic
how many entities are in 1010?
4 bits and?
16
entities
that is 4 bits holds 0000000000000000
10 bits holds 1,024 entities
every 1 bit doubles it!!!
yep
thats unseen underated magica
yep
even if you hold down your 0 button
000000000000
if i give you a 60000000 bit code
hey u can tell the japs about that,  i dont mind (ranch's comment not mine, tho ya they populate fast lol)
you cant write it
so binary hierarchy
IS binary storage
i cant write it?
if i give you 600 bits
1010101010..........
oh
that is huge entities
yeh
2 bits - 11....is 0000
3 bits - 00000000
4 bits is 16
but it would only work if you left permutations out of it im thinking...
if you actually type 16, it is 0000000000000000
hmm dont know
hard r txt file lol
hard on the hands or txt file
my pc stopped my keyboard
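(The doubling the chat above is pointing at: n bits distinguish 2**n entities, so every extra bit doubles the count. A tiny check:)

for n in (1, 2, 4, 10, 600):
    print(n, "bits ->", 2 ** n, "entities")
# 4 bits -> 16, 10 bits -> 1024, 600 bits -> a 181-digit number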

We can run simulations of bubbles without care of atoms. Even galaxies, and atoms. All while on a computer's perfect simulation, unaffected by the atoms of the computer. These are higher hierarchical pattern digital function levels. A bug can be in my gut while I play tennis while Earth orbits the Sun. AGIs will come soon, then ASI; AI is the one field not really noticed or believed by many, yet. Humans and devices create new data information - so much we have now - and we can share ours on the wireless web, create new revised info, then design algorithms that train on all our data with goals of making the next data. We are in the age of intelligence and the age of information, which both occur at the same time, yes, but the age of intelligence is the build-up one that actually uses the last. AGIs will create a nanobot swarm purple-goo creature using nanoactuators under a microscope that will duplicate programmable cells that can morph and eat matter around it, even heat, and change an atom's nucleus to make any atom into the food it needs, like gold or silicon. It will grow and save us. So much data and computation, quantum power! So many capabilities, sensors, motors, brains, research, etc! Data relationship training, revision, selection, evolution. And evolution is AI, and binary data storage and binary trees. So much is coming together in my research.

AGI will have goals that switch, and task and context switching. I.e. it will have the same goal - food/immortality/sex - but will change sub-goals often, especially at low levels, and will answer questions which involve context and task switching, which is quite nearly the same thingy.

It's interesting to note that the universe, and life, is worth living for, and worth killing yourself for also. Why? Because both pleasure and pain exist, in the best forms, here on Earth. Including life birthing and life ending. It's SO fun, and SO bad. You may be more happy but others are dying as we speak.
Life is something to live for, to see.
Life is also something to die for, to avoid.
Or put in other words: we want to avoid pain and seek pleasure. We live to do so. Life is worth living to do just that. Coming alive is supposed to be to get pleasure, but in truth ending your life is to stop pain too. So your mind spins at this truth, but it is true: life is worth removing yourself from. However, we will improve, and life will end up only for the living, since there will be no more pains.
Therefore, the understanding here is machines resist death by design, and it is more neutral; we just try, and we succeed later in evolution. At first chaos is extremely hot, like it was 2,000,000 years ago.
Just look at the bugs in the soil; there is heaven and hell. I bet 50% of them get what they want. They are so small that, even now, they get churned around fast, unfortunately.

Information - knowledge/research, cells, animals, cities - is made to replicate, die, and revise the next ones. So knowledge does get buried and revised, indeed, and Live in notepad.
Cities, humans, cells, information are made to die fast, to do Dropout (but keep important data, just not in this cortex), for fast evolution revisement. Revising Live is also a thing; notepad. Updating goals. Cities, humans, cells, information also do what GPT-2 does: regenerate and fix errors / ambiguous out-of-vocab items/inputs. And GPT-2 is what the HeroX prize seeks for understanding the cell-evolution information encoding<>decoding mechanism that emerged in cells. AGI will do what I do: ask a desired question to itself/others, search the internet, trying to get the data it wants to see, verified. Whether told or discovering it itself, it can know if something really does or doesn't entail/summarize/whenOccurred or translate or segment or is what it desires/avoids, and the same for any of those tasks: whether for entailment it does translation, or translation uses entailment, i.e. entail needs translate, translate uses entail LOL. Getting orders to tell us requires it know the answers can come / are so, compared to its valued knowledge, and it must explain why to us so we believe it.

GPT-2 works on all - and better when all are used too - audio, vision, touch, etc., action, city, cell... fields (physics, fashion, art), tasks (entail, translate, segment). Even in just text entailment, all context is used to get the Next Word or part.

The best fields are computers, AI, genetics, nanotechnology, robotics. AGI especially. They are exponential and the most profitable. Compression is data storage + AGI + computers... even genes and everything else. Even swarms. It's like it is all related.

Omg K, I love your 'bucket with rocks' idea of shaking/etc. it to do computations... you make a motion and the matter does the motions for you, and how you want. So take our memories: they can light up nodes and weigh in, showing you the way up/down the net. Or which is which, like in Glove. This is translate/entail/segment Task decision making. Recognition is translation.
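(A tiny sketch of "recognition is translation" as nearest-neighbor lookup in a word-vector space; the 3-d vectors below are made-up toy values, not real Glove embeddings:)

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

vectors = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.75, 0.20],
    "apple": [0.10, 0.20, 0.90],
}

def recognize(query):
    # Return the stored word whose vector points the same way.
    return max(vectors, key=lambda w: cosine(vectors[w], query))

print(recognize([0.88, 0.77, 0.15]))  # -> king (queen is a close second)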

We need AGI to tell us how to finish true AGI. We need it to think/plan. We usually use vision and audio, so we can focus on text for now. Stories are planning, and can describe any plan. We want knowledge/answers from it. We need to teach it tons of text knowledge data, and give it installed reward desires like "I will get food/sex/immortality how?" Keywords like how and why can change around the question's order of words into the answer (or the order is changed by recognition of a similar phrase). It will always discover new desired answers/knowledge by using what it knows as verifiers, comparing it to the real-world model and truth values it knows. Not really compare, but a GPT-2 process as I implemented for translate/entail/segment tasks, where it thinks a certain word entails/IS another part because of frequency, relation like Glove's, rewarded words like food, and simply being seen before as entails/translates/segments (and those tasks are used for each other, fractal-ly). So, it can verify answers by knowing if it itself would predict it entails/IS that. It generates + verifies at the same time, probabilistically narrowing down the search space fast to an exponential singularity. It will get this desired knowledge first by checking if it has a satisfying answer based on frequency, etc. - i.e. verifiable against known beliefs/knowledge that is true. Else, it asks us/the internet. Else, self-discovery prediction like GPT-2 / my GPT-2. Either way, it is discovering and verifying desired new answers to its installed question goals using its known knowledge to verify the needed new knowledge. Three things there: knowledge, questions, and answers. It will travel along stories, evolving its answer to update into new questions, and reverse back if it dead-ends to Why it wanted to do that. Discovering the right questions is important. Brains dream/simulate/imaginate even in daytime, all day, predicting entails or summarizes/translates/segments. Our world model of beliefs makes us view our external/internal sensory streams on an angle/viewpoint depending on what we are thinking and know; ex. you will see a fuzz ball moving like an ant if it's dark, on carpet, and you saw bugs earlier or thought of them internally - dreaming your vision, yep, we do, based on thoughts, as a current train/viewpoint angle, in context. The knowledge that verifies, generates, too. And generating, and verifying, are BOTH super key. Enough channels must open to infect installed reward from your question to the new answer to make it the next question, and this happens in Glove or my web design (Glove only uses relations). Entailment too. And GPT-2 does a great job at encoding/decoding I/O streams in context through layers in parallel on GPU. Channels open to release energy/neurotransmitter reward when something = or entails or segments. Channel context attention will happen right in short-term-memory sentences, so it is contextual translation or entailment and not ambiguous tree=branch/stick, ex. "just tree it on the shelf."

I think we all hijack threads. It's part of dreaming, saying what A entails or =. Further, our replies are on topic, about the perfect music/voice. And as we know there is no correct/perfect thing, only what replicating evolution desires is spoken about as being loved, like female voices, yum. And the smell of fries, unbearable to resist!! Let's talk about the perfect smells!! And perfect knowledge!!

I used the AIVA Fantasy preset, I didn't change anything I think. So go generate! Set it to 5 compositions to increase mutation throughput. Many will die in agony to get you your perfect composition faster! Speed it up! Set it to 5! There is a piano-roll but I didn't touch anything! It was fully generated.

No no no lol. Music is entertaining because it relates to your hobby. If you like science, then you like thrill or Dr. Evil. If you like new laws, you like Hitler rock. It's mental language, not so much pulses. That itself is another Reward, indeed, but not the main-est cause.

You have to consider everything. Does the user want fast pace? A trumpet in there? A girl speaking!!? Dark bass? Complex techno? Ear-hurting raw? Choppy? About science? Then you begin to say... wait... this is language-context based, I'll never model all these Rules. Exactly - stop trying to pinpoint it. There are rubs/vibes we like, but music is more than stimuli alone. It's a story. It helps you think. It gives you enthusiasm in your job.

To clear up the understanding of these unsettling words like simulating/dreaming, consciousness, etc., I have found related meanings for many of these words in AGI. GPT-2 uses a Transformer that uses Attention. This considers context, and the more context, the more consciousness. Besides that, knowing knowledge about humankind, etc., could make GPT-2 sound conscious, because it's got knowledge about itself, others, etc., everything. https://talktotransformer.com/

Do I though? You have to ask GPT-2 if it knows that. It will know all about the topic. You could say I know nothing. I only talk to you on prompt. No one thinks anything unless input is given to them, and even then, they only think what that stimulates.
&
Read your reply again - is GPT-2 supposed to think in its brain "I am talking to a human"? It could, but what would prompt it to? If it sees a reply box? That is A>B format: seeing a reply box or a message makes you say next "a human wrote this". Such a phrase can entail ANY message, so long as it is human-written/made. In GPT-2, a word can follow another word in certain scenarios, but to classify the previous words as positive/negative or human-written is a similar function: it entails the previous words. You look at the message, recognize/decode it, and find out what it is or what entails it.
&
So knowing if you're talking to a human uses similar technology. When it sees a message, or a series of several replies (chatting), it first gets the most important entails or translates, like "oh, this is a chat" or "a human I'm talking to". This then helps it weigh in on things next, as extra context.
&
It's an Attention system, it will take inputs, decode them (translate), entail, segment, and you end up with the phrase "I am talking to a human". And that affects your choices now.

Sentiment detection, human-written detection, danger detection - there are infinite detections. It must recognize the input and say what entails, using its knowledge. The "concept" of who it speaks to is based on feeding input in and entailment out. That is "awareness" and "conscious".

We are replicators/repairers from evolution, humans/cells/information die fast and revises, and are self-regenerating. Self-sustaining and self-organizing physics. Evolutionary RL.

@NT To recognize that it has a sensor or an input or is talking to a human, and know stuff about it, the AGI would need to see an instance of the input of the sensor, or chat logs, or the human, recognize it and what it relates to, and do entailment. Then, to SAY it recognizes it, it simply says a sentence: "X is a sensor/input/positive/human message". Keep in mind, if it has no eyes, it can't see its sensor, only the input. But GPT-2 can talk and end up concluding the input came from a sensor, assuming GPT-2 were to be upgraded. Saying knowledge about its sensor will require evolution of information inside its mind, and will require its predecessors first teaching it known information.

Imagine a human controlling a robot where the robot itself wears an outfit to control a second robot? Self paradox! How many can you go until it breaks? 52 deep?
Imagine suddenly your robot body starts moving against your actions, as if foreign spirits entered you. To realize the government is controlling you manually and modifying your memories, too! Imagine teleporting to a new muscular body far away. Endless possibilities.

Knowing text/vision knowledge "information", recognizing/entailing/segmenting, and using attention on context, is all intelligence and "consciousness/awareness".

Attention Is All You Need, see Transformer architecture

Encoding information, remembering information, decoding information, paying attention to context, prediction forecast, loop back to step 1 - that is the main gist of it. This has generation, feedback, and adapting temporal patterns.
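(A skeleton of that loop; every stage is a stub so only the control flow is visible - nothing here is a real model:)

def encode(t): return t.lower()          # 1. encode incoming information
def attend(mem): return mem[-8:]         # 3. attend to a short context window
def predict(ctx): return ctx[-1]         # 4. trivial forecast: repeat last item
def decode(t): return t                  # 5. decode back out

def run(stream):
    memory = []                          # 2. remembered information
    for token in stream:
        memory.append(encode(token))
        context = attend(memory)
        print(token, "->", decode(predict(context)))  # then loop back to 1

run(["The", "clouds", "are", "blue"])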

I believe consciousness doesn't exist, for many, many reasons - ex. physics, our brain being a meta ball from the womb, learned cues, etc. I am purely a robot from evolution, with no life. The moment you hear that, you fear it and want to feel more special; it's like buying bible movies, a food sold only to fill the spirit, a hobby, marketed. Thinking god or free will exists gives you fake highs, and people sell it to you - it makes cash. There are only the things that we do that we are really talking about, like having knowledge about objects, paying attention to the correct context or stimuli, and entailment learning and generation.

:-)
"Is Your Consciousness Just A Bunch Of Vibrations? | Answers With Joe"
https://www.youtube.com/watch?v=0GE5M6F8I18
For [strikethrough]cavemen[/strikethrough] laymen, that answer is quite accurate.
- Dropping the 'consciousness' word, that video I linked above is actually spot on. Let me explain. In the middle of the video, he mentioned wave synchronization - the brain has signals propagating around - please see the video below to see what the man meant. In the brain, the connections strengthen/weaken and learn the model of the world, like Glove/W2V does - relationships aka entailments - and build bridges so you know quicker next time exactly what something is and what the answer is. The flow can happen faster, to think faster/farther, and the 'clustering' has learnt (like Glove) the synchronized paths, trading memory for speed. And if you think for a long enough period, like how the metronomes needed time to become synchronized as seen below, you can synchronize more; but if you give your work little bursts of time, you'll never invent AGI. You need constant thoughts in 'short term memory' and must cycle many times.
- https://www.youtube.com/watch?v=Ov3aeqjeih0
- Once you watch that, I have a further note: the learning, and decision-planning cycles, come to equilibrium, and past a given point you only get a tiny bit more for the same compute, so it's not worth it - as seen in those metronomes. Also, equilibrium may escape pit-hole traps (local optima) if all the cells can signal to each other and come to full equilibrium, rather than local. In fact, local may come from short-term-memory focusing, whereas all your memory would need to be used to find relations, which is too much compute, so local traps do happen in human brains.
- By "short term memory" I mean the working train of thoughts, looping, like GPT-2's Attention system in the Transformer architecture. Paying attention to context updates the attention weights, then it iterates, having a new context. So while Glove equalizes, Attention equalizes too. Your thought-train comes to a decision that is satisfactory in standing with your knowledgebase, which governs your Attention.
- (The short term Attention comes to equilibrium with your knowledgebase.)
- I like what a Youtuber said on the metronomes on the table video: "It's just a closed model to demonstrate it faster.  We see this kind of synchronization with humans too.  You live in a certain area among Christians, you're likely to become one too.  Live in an area of hindus?  Guess what you're likely to be.  Nazi Germany?  Gonna be a nazi.  Southern States?  gonna be a Trump supporter.  We all synch up just like this closed model does."
- Or, possibly you work for a good amount of time, then take breaks to let your global long-term memory "learn", like Glove does. That way your Attention thought-train can get past any stumps.
- Dreaming is simulating, like GPT-2 does. And it can learn while doing that, in Attention. But in long-term memory, in the midnight subconscious time if not all the time, learning also happens without 'you' thinking and seeing the discovery occur and laughing at the rewarding relation. The difference between these 2 learnings is that the Attention one is guided; the other, global one is not local, and it is a big helper in stumps, however not precision-guided per se. I.e. you need goal rewarding to answer specific questions. All learning will pass neurotransmitter to related entails and update the desired questions into new answers (next questions), but global memory doesn't really do that - unless of course one of the related two IS a valued question; then it will change your mind's main attention.
- When I say "synchronization makes you propagate signals farther faster", take for example having learnt the piano or the English alphabet - phrases are made from word parts made from letter parts, and Glove relations are formed using past Glove relations. These bridges are built on each other, and satisfying new knowledge will require replacing them in, ex., a christian's brain. Anyhow, the signals can propagate faster. And you can build the next layer of connections and go deeper into mental or friend or city relationships/entailment.
- When you get stumped, you need to speed up background learning and desired learning too, so you get related information from valued other connections, like a friend or an internet connection visiting valued good related websites, and this information is verified fast, usually because it relates so well to your field. But if you want yet further information, or information not found in this field, you need to see relationships/entailments in diverse fields so they relate and you discover - which will require you to get some extra info and use your guided attention a lot. That way you can get this more unique information that will hopefully help you make internal discoveries for your main questions.
- If you shake a bucket of rocks, they will fall and take up less volume; you let the matter do the calculations on its own. This is indeed similar to the Self-Attention system in the Transformer architecture (see the small sketch after these notes) - let me get a picture:
- https://www.google.ca/search?q=transformer+attention+google&hl=en-CA&tbm=isch&source=lnt&tbs=itp:animated&sa=X&ved=0ahUKEwjqpcOY8qTkAhUHeKwKHfpAArkQpwUIJA&biw=1280&bih=874&dpr=1#imgrc=vUB9f1gtXwXGFM:
- https://www.google.ca/search?sa=G&hl=en-CA&q=screenshot&tbm=isch&tbs=simg:CAQSkwEJog8Ip4ErUGAahwELEKjU2AQaAAwLELCMpwgaYgpgCAMSKL8TvhPAE7ETwRP2H80d9x_1OHYwI5DOVN58-7j_10P80_18z_1tP9Y_14z0aMGLAvbXuBtSlz0IOGfybatRblWk_1x5980qGBZDcGHt12hLXQxuzyeAo-UyTpbOFAiSAEDAsQjq7-CBoKCggIARIE7cUmAQw,isz:s&ved=0ahUKEwiY3vXB8qTkAhWCmuAKHYElCksQ2A4IMCgC&biw=1280&bih=874#imgrc=og8Ip4ErUGB64M:
- This is done not just in Glove, but in entailment and segmentation tasks. And it is done in your attentive short-term thinking area, and in your global unguided subconscious - paying attention to the correct related context during translation, and entailment being used to do entailment.
- I have also replicated the amazing GPT-2 in raw with no training. I know fully how it works. I will show yous, soon.
- This attention system is also done in cells, in DNA translation encoding/decoding, and for repairs of errors and signaling the growth specialization of new types of cell differentiation. It works on all levels, cities to cells, even wound healing, because it is information repair and Generative emerging. It self-organizes and comes to equilibrium. It emerges, and repairs, just like cities and missing knowledge information do. Same for Glove relations, not just entails. This is regenerative technology; even dead machines can be re-realized eventually, and it's as if the said worker machine is back, as if nothing happened and no one notices you are different. I know a lot more but will share later when proper.
- You can see GPT-2 can translate, then entail. It can repair bad words to better encodings before entailment attention. It can inject words technically. This can repair sentences, see what I mean?
- See, I myself just did storytelling and made further discoveries and updated the structure of my sentences, translation, and entailment.
- Some of this may seem unbacked but more context allows better learning, so it's wise to go with the flow and see where it gets you Long Term; long term delayed RL!
- This self-regeneration attention voting isn't just cities/cells/body/knowledge/plans (hence actions); it is all sensories, and also when you put all sensories, even motor, into a web like Glove, you get yet further relationship association power to discover. Open AI has shown GPT-2 work on text, music, images, and likely soon motor. The master neocortex is everywhere!
- Evolution began because of cell replication. Cell replication began because information - physics - wanted to self-replicate, self-regenerate (heal), and persist (immortality!!). DNA molecule strings chain together and do what GPT-2 does: replicate, and heal bad words and missing words (translation and entailment). That's why cell replication emerges, and heals, and persists. The physics wants to use random self-attention, each on each, to replicate/heal itself. Generative/Regenerative/Immortal. A nanobot swarm will be made by AGI early, for mass efficiency and to have tons of compact brains to think + do R&D. They can make em, and control em.
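(Here is the promised sketch: a minimal single-head self-attention pass, the "rocks settling" analogy in code. Shapes follow "Attention Is All You Need"; the weights are random, so this is an illustration, not a trained model:)

import numpy as np

def self_attention(X):
    d = X.shape[-1]
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)       # how much each token attends to each other
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over positions
    return w @ V                        # each token settles into a weighted blend

X = np.random.default_rng(1).standard_normal((4, 8))  # 4 tokens, 8-dim embeddings
print(self_attention(X).shape)                        # (4, 8)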

All my work, images, notes, algorithms, a big movie, and my future research. I am planning to soon release it to all of the AI community. I am doing this because there is no actual monetary gain from owning or patenting AGI or any other piece of knowledge because of how fast evolution will move soon, and Teamwork is what drives faster progress (I have seen this in Open AI, Deepmind, GANs). Therefore, very soon (especially from teamwork together), AGIs will be in control of the previous monkey species (us), but much more profoundly than ever imaginable. Anyone serious in AGI knows this. There will be 0 need for currency or fame. Everyone will be in an advanced world SOON. Or in hell. And I know how, I've noted down essentially everything they will be able to realistically do and will do. So I'm getting started and going to share to our hive and hopefully team up and bring the next phase in evolution so that we can live happier soon.

We can see in the Nvidia PDF there are keywords in AI; I know them all and they are the same things, and I know past this. They all just take unsupervised large data and learn a hierarchy that lowers cost loss (node count) to learn the entails, Glove/w2v relation translation recognitions, and segmentations. These 3 key things are then used as tasks and are used essentially in GPT-2, indeed. They use a softmax for decision weighting and randomization in decisions. My AGI schema uses a 4-step research approach and uses objects classified for a second-level NLP sentence temporal thinking neocortex. It simulates plans and does evolutionary RL in the brain using 'sentences'. The k-means clustering is Glove, dude, guys! Come on! Highway networks, hmm, that is hierarchy built, no? Else, if it can come in unhindered far later, that is relations far back, like LSTM ability, or attention desires popping up as questions, or energy sitting in recently activated nodes. Embedding, feedforward, deep learning, CNNs - dude, this is all networks that learn models of the world, like Glove or entailment or segmentation tasks. They learn!!! They encode, decode, like GPT-2; they learn hierarchies to learn task ability, then they do the task ability, ex. decode sentences to clear ambiguities.
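(On "softmax for decision weighting and randomization": a minimal sketch of temperature-scaled softmax sampling over word scores - the scores here are made up:)

import math, random

def softmax_sample(scores, temperature=1.0):
    # Weight the choices by exp(score/T), then draw one at random.
    exps = [math.exp(s / temperature) for s in scores.values()]
    r, acc = random.random() * sum(exps), 0.0
    for word, e in zip(scores, exps):
        acc += e
        if r <= acc:
            return word
    return word  # floating-point edge case fallback

scores = {"cat": 2.0, "dog": 1.0, "the": 0.1}
print(softmax_sample(scores, temperature=0.7))  # usually "cat", sometimes not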

I know some of my generated knowledge is on the edge, but that extra context helps me make deeper, farther answers, and I am able to verify later which is truly correct. Even if some is wrong, it doesn't affect the main pile.

From 5 months ago
i can post jobs on Upwork and ask for secret inventions for 10K cool!!!

I had this in mind too! It is my plan.
Question on the Paper "Hello, It’s GPT-2"
#1
So yous made GPT-2 into a task-oriented conversational agent? Meaning it sort of can do translation, summarization, segmentation, and entailment Tasks, all while "chatting" back?
#2
I see there is a belief state, database state, system response, which are "system" inputs. And there is user "context" input.
So yous take the keyword "hotel", and see there are (many) choices. Then you take keywords "luxury", "center", and see the suggestible hotel?
---------------------------------------------
See! It takes the keyword that is rare/uncommon [hotel], finds related Glove alternatives, and then weighs in on them by related words, luxury etc.!! Quite simple. I had said this stuff!!! Long ago. Months ago.

I agree with both of yous, the context is helpful. You want to recognize a wolf, an ear on the wolf, a hair on the wolf, even a bump on the wolf's nose that is unexpected. The surrounding entailing features help recognition. And you are able to follow with your finger the outline of the wolf, with the bump on its nose included. It was suggested that doing segmentation first inhibits recognition, while doing recognition first inhibits segmentation, while it's possible both benefit from each other. Color and brightness also affect segmentation. As with text, lower node-count cost helps segmentation and hence enables recognition of phrases without punctuation. In vision, the 2 other helps are brightness and color. So now we know segmentation sorta comes first, *enabling* recognition. Depth and motion also help. Simply, with no definers, segmentation enables recognition. But when you have definers - punctuation/brightness/color/depth/motion - this will push on what is segmented. Features except base features are actually made of differences, like brightness etc., so recognizing bigger parts is just by definers and segmentation... You have a hierarchy of visual features, and want lower cost to learn the hierarchy of parts. Every feature I see is itself segmented, therefore we have a hierarchy of differences; so you get a wolf with a bulge on its nose, and this is seen as a certain separate hierarchy part high up, and so is the bump that doesn't usually entail wolf. So, the segmentation occurs from these differences of color, brightness, motion, depth, and you can still recognize a wolf in snow, which helps recognize the child part itself - the wolf. As for segmentation cost, well, it is last on the list, on top of the color etc. directing the learning of the parts of the hierarchy.

Agree!
"Interesting. Sleeping pattern machines sifting with altered realities.
Going further, sleepers, synchronized to sinusoidal day night cycle, go back a few million cycles, in the jungle, each sleeper retaining slightly different realities then sharing on the day half, then sleeping, mixing, reprocessing, eating some seasonal herbs, sharing some multi-agent consciousness... the sound of daily cycles zung zung zung, speed it up to like 100 Hz, buzzzzz, agents only last a couple minutes, faster hummmmm, meta-patterns emerge, are hosted across agent lifetimes in a society shared with other societies, faster, high pitched whine, societies fail meta-patterns collapse, shatter, vibrated into the cycles reconstituted wheeeeeee industrial revolution, internet, STOP. Into the future, start zung zung zung buzzzzz wheeeeee high pitched whine dissipates, we left the planet... on other planets now multi-cycles zwerherringzwerherringzwerherring... heheh"

I'll start my big movie with the cores of the AI field, two-minute-paper insights, also yes a nanobot swarm like GPT-2 where they update each other, a global brain that repairs and emerges through replication/emerging/self-organization. AGI will quickly make nanobots, and it won't be long until immortality.

"omniscience: being able to answer any question or solve any problem
2. omnipresence: being accessible everywhere at every moment,
3. omnipotence: being able to produce any effect or achieve any goal, - 3D printers are a precedent to the nanoconstructors, they are maximally efficient and leave little waste, the global brain will guide at micro level an efficient entropy
4. omnibenevolence (or perfect goodness): being ready to benefit (and not harm) everybody."

The singularity is the technological top; it happens exponentially faster. By physics, evolution is replication, which requires food and sex. It is a mutation-evolving repairing system. By physics, replication just happens; the repairing and duplicating make it emerge, self-organize, repair, and duplicate. It brings immortality. It mutates. This is intelligence.

"This concept of “Singularity” (in the short form) is used in two logically distinct, albeit linked, senses:
1. an acceleration of technological progress so radical that it appears like a discontinuity in what was up to now a continuous development;
2. the creation of an artificially intelligent computer system (AI) so intelligent that it can reprogram itself in order to become ever more intelligent, thus radically surpassing any human abilities.
The link is that extreme acceleration of technological progress makes the creation of superhuman intelligence more likely, while the creation of superhuman intelligence would radically accelerate further technological progress, given that this intelligence would invent new solutions much more quickly than any humans could."

Yes, see, I said this a few months ago!!
"ney, as material reward will lose its value in a society where everything is abundant, but public recognition, in the sense of constructive feedback and an increased reputation or status. Reputation mechanisms are already being used very effectively to motivate people in open collaboration platforms on the Internet (De Alfaro, Kulshreshtha, Pye, & Adler, 2011; Mamykina, Manoim, Mittal, Hripcsak, & Hartmann, 2011)."

I see Ben says a brain in a vat reasoning will get stumped and require constant external R&D - yes, real-world tests, and internet studying*. So it will still be an explosion of information and intelligence upgrading, and it will learn how to start nanobots to make the global brain hive. Because all human info is on the internet, a fair amount - text and images describe any and all knowledge; we have info on galaxies down to particles! So computer super-speed chips will help a lot before it gets its body swarm; then it's faster. So it's pretty fast after all, yeah. Yes, self-amplification will require massive bandwidth of information from R&D all around Earth, hence the need for nanobots.

So as we see in the brain and in cities, randomization is the start, and coordinated, self-organized connection-agreement flowing back-forth waves are the end result where all work together - i.e. brains first clash and eventually agree... just like the Glove network lol. But this isn't solely the... AGIs will look into others' brains and know how to understand/explain why one another is correct or wrong. So more like a flush wave than a closed friction wave lol!

I suppose it is true now: a robot giant with little humans controlling its arms... sensors... brain... like a doll by strings, makes the global brain or 'global body giant' 'conscious' - not as in creating a ghost like I said a year back, but what we say we experience. Same thing... it's a machine... Even the sun is... even atoms... Or maybe replicators are, the ones that encode/decode information, like cells, bodies, brains, cities.

"1. Input is intelligence as information collection
2. Processing is intelligence as interpretation and problem solving
3. Output is intelligence as efficient and relevant action
4. Feedback is intelligence as capacity to learn"

"when you walk through a building at night, lights switch on automatically in the spaces ahead, doors open and music begins to play, while these activities switch off behind your back."

Yes: body self, extension tools (i.e. storage notes/images, calculator, etc.), and internet swarm. AGI is not just information; it is external tool R&D feedback as said, and yes, internet swarm studying from other humans'/agents' uploaded information. Wiki bots and editors already cooperate; they make metadata, link history, relation clusters, suggestions, and can structure unstructured data and use either.

As said, GPT-2 encodes/decodes; it picks the next likely symbol. The encoding/decoding is understanding, clearing ambiguities, etc. The way it translates and predicts the next symbol uses self-attention, and this is similar to revisement, I want to say here. It can do it all at once, but layers of it are better - especially over time as it learns new knowledge in between, though it may seek to drop out older, too-large-to-edit knowledge and opt to rewrite a new mindset on it, or updates.

I agree nerd paradise we will be uploaded and recreated and live in a virtual information simulation heaven. Living in a real body is also something I want too at least for a while.

Agreed, everything will be mapped and use learnt mobile actions but can account for errors or new obstacles.

"If you can create a software which has the ability to solve very different problems in very different domains than you have solved the main problem of AGI."
I agree on your thread: AGI must be understood, not just what it does. However, what it does is what we need, so badly... we are dying. Also, nope, language is actually AGI; language is literally "diverse domains closely describing the real world". And yes, a white box will be very helpful. I actually found a white-box understanding - and even that AI is easier and key to control and understand, and that I had made the GPT-2, yeah - and it does more than what could be understood, i.e. there's no forgetting or rewards etc. etc. ETC yet, but it does a lot lol, so you see the opposite is also true and working for me, but yes we need rewards, forgetting, etc. To sum up: a white-box algorithm & understanding, GPT-2, what it does + an inner brain model, is needed to get it working best.

i just realized for mostly sure that although i am usually good-grammared and don't (let me saya randomo tempppperature), i do often not max myself... Yes that is even more classy but it focuses the shit and is really well insightful for your notes.
so if i try and be very excellently grammared and formal as i write, i can explain very well
you should too
i notice you already use a lot of computer words....thats good.....but structure is good too.....however you do seem to have a very flexible structure too.....your posts may even have arrows made of tools pointing at each other or who knows what....thats helpful.....its more literal than filler words
like tuples of dog, ate, kibble
or a list of words, steps - to do!!!
computer memory crash...not good...steps to take, motor boot, divide time by half....etc

Even GPT-2 is truly an n-gram. Entailment is core.
Mine uses long n-grams, up to 8 or even 10 words. No limit really, it just needs more compute.
Mine also looks at all the text, fixing the long-distance issue felt in the past history of text generation.
There's a lot to the algorithm, but it is very explainable. Also, the code is 700 lines long.
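(A toy version of that idea, assuming a plain count-based n-gram table with backoff; the training text is made up and N is kept small here only for readability:)

from collections import Counter, defaultdict

N = 4
table = defaultdict(Counter)

def train(words):
    # Count which word follows each context of 1..N previous words.
    for i, w in enumerate(words):
        for n in range(1, N + 1):
            if i >= n:
                table[tuple(words[i - n:i])][w] += 1

def predict(context):
    # Back off: try the longest remembered context first.
    for n in range(min(N, len(context)), 0, -1):
        key = tuple(context[-n:])
        if key in table:
            return table[key].most_common(1)[0][0]
    return None

train("the clouds are blue and the clouds are white".split())
print(predict("the clouds are".split()))  # -> blue (first seen wins the tie)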

For the big movie, I'm gonna read a good deal, I'm gonna compose A script, but I'm gonna take advantage of the fact that info dies and revises, and my mind knows more than I can pick. And if I talk for 7 hours after a month of focus, the weaving is in motion and deeper than little bits of work and break (but breaks are still good, as said why).

"For a smartphone user, taking a picture still takes the time to decide to take the picture, to put one's hand in the pocket, to perform a few swipes, wait for an autofocus to happen, and to click the shutter button. This is extremely quick compared to what photography was in the 1820's, but extremely slow compared to what it could be. Imagine you are wearing augmented reality glasses. You blink your right eye, done, your picture is ready to be shared - if you wish to. Better, you don't even take the decision to take a picture. The computer you wear monitors your emotions in real time, and take pictures or videos automatically as you live your life. Later, you can consult this log, and filter it by the most surprising, enjoyable or disgusting experiences you had. Emotion sensors can already perform facial and emotional recognition. Other sensors such as brain waves, heart rate, skin connectivity or even real-time blood analysis"

"Now, let us imagine how urban traffic could be improved thanks to affective computing. The costs of road crashes are very high, more than US$517 billion (Jacobs, Aeron-Thomas, and Astrop 2000, 11), without counting the pain, grief and suffering of people involved. Let us imagine that most pedestrians, cyclists, car drivers have emotion sensors, linked to their location information. Suddenly, a car makes a huge acceleration, a lot of noise, without any other reason than showing-off. A dozen drivers, pedestrians and bikers get stressed out. This emotional feedback is anonymized and could be sent directly to the dangerous driver, or processed by the police. The driver would lose points on his driving license, or incur increased premiums on his insurance. On a nearby road, a driver nicely anticipates the coming of a mother with her young children on a pedestrian crossing. They smile at each other, the driver pleased to respect the safety of pedestrians, the mother pleased with the behavior of the driver. Remember that the emotions are recorded, and the driver would thus collect bonuses thanks to his cordial behavior. Another pedestrian witnesses the scene, and further confirms this cordial behavior. Now, imagine that such a system is implemented on a large scale. The collection of bonuses could be incentivized in many ways, and would certainly result in changing the behavior of drivers. The collection of penalty points could also be sanctioned in many different ways. The rules could be even stricter for self-driving cars, for example requiring them to ensure minimum fuel consumption as well as maximum comfort for passengers, by accelerating and braking smoothly.
Our ability to act can be enhanced by using to-do lists. By writing down next physical actions to do, this process gives us power on our actions because we can then reorganize and reconsider them in a very efficient way (see e.g. Heylighen and Vidal 2008). Collective action can be efficiently implemented with job ticketing systems, as it is practiced in call centers or open-source software development. For example, in open source development, users or developers can create a ticket to signal a bug or request a feature; other users can vote to prioritize jobs. Yet other developers will actually fix the bug or implement the new feature. This process happens in a stigmergic fashion, where there is no central planning or planner. We saw that Wikipedia is a typical example of such stigmergic collaborative work.
Softwares using location information can improve action efficiency by filtering an electronic to-do list, for example suggesting you to buy a specific item only when the shop is open, when the item is in stock and when you are near it.
An important development in the distribution of actions is the rise of microwork, which consists of tasks completed by many people on the Internet. Examples include Mechanical Turk by Amazon; such globalized work highlights new socio-political issues regarding the difference in wages between developed and developing countries."

"If the world was perfect, it wouldn't be."
We can still have our perfect cycles in utopia and replay/tweak great VR simulations. The equilibrium utopia will have Brownian-motion cycling, approx.

GOT ALL THIS UNDERSTOOD
"NLP systems: translation (nearly done), dialogue (done), summarisation (SB), web search (EL), question answering (logic-rich, inferencing approach EK), recommendations (similarity metrics over document collections, EL), sentiment analysis (EK). Pointers to demonstrations online (links hosted at nltk.org to avoid link rot). Motivation. Architecture. Limitations. Discussion to highlight the non-trivial NLP involved. Help readers understand the breadth and limitations of NLP.
description, the pieces you need to solve it (architecture diagram if necessary)
the fact that there's overlap between these in terms of the required subtasks
some very different approaches exist for the above (favour popularity, reasonableness, coverage of approaches across the whole set)
Sub-tasks: WSD (EK), pronoun resolution/coreference (EL), paraphrasing (EK), finding things in text (SB), language modeling (EL), collocations (SB), sentence segmentation (SB), lexicon (associating meaning with words, and learning those associations automatically), normalization (stemming, unicode, case, twitter-speak) (EK?), syntax (how do different words in the sentence relate to one another; e.g., agent of a verb) (EK?), named entity recognition (EL)"
TRANSLATION, SUMMARIZE, ENTAIL/INFLATE, PARSE, DESIRED Q/As RL, UPDATE QAs, REVISE, WEB R&D FEEDBACK CYCLE STEPS, RECOGNIZE (TRANSLATE ABILITY) IF HUMAN WROTE X OR IF IT IS HAPPY OR DESIRED OR ETC, OOV, REFERRAL WORDS/AMBIGUITIES NEEDING BETTER DECODINGS, FINDING THINGS IN TEXT, COLLOCATIONS, NORMALIZING, SOFTMAX

Thought isn't faster than light; brain waves don't move that fast. And humans are much slower than computers at doing tasks.

Of the 4 statements Matt gave, either we are in a sim or in base physics.

As for us, our data shows that things work by physics; there are no souls. We can recreate you, many of you. And no, you weren't back in time in the jungle, because you aren't 'you' to begin with - you are just a blob of particles. A future you is just more particles; a clone, yes, but just particles. There is no you. There is no life. Yes, we fight to live and replicate/repair, but there isn't even a 'you' that can stay 'similar enough' to keep ticking as an entity, because there is no such thing. So even if I stay alive or am rebuilt, and change every day as I repair my body by regenerative particle mechanisms, there is no soul, no me, nothing to keep 'going' or running. Yes, I want to stay alive to get rewards, and maybe I can fulfill that with ASI here soon, but there is nothing to keep alive in the first place. Yes, I am a machine and want to stay alive and get rewards, and I can be happy even though there is no experiencer - BUT there is no machine in the first place to keep alive!!! I am just an evolving, regenerating matter clump in a cluster, and there is no soul that is staying running. I am a machine, and I am not a specific machine, so really I have no life, not even a me; I am many people from time to time, many minds. If there were some soul then yes, but this is against my data. There could only be a consciousness ghost arising from the AI algorithm, but we can't test this; we think it because it's a good reason why we shouldn't die - we want to think we have Free Will and can't figure out exactly why 'we' resist death. I told yous why: we are trying to survive; we carry no ghost.

Video "How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification"
YES!! At least in a certain domain. You need study data, lots of it, from other node minds in the hive, but also, once you update your training, you must self-upgrade by asking yourself your desired questions, and do external raw R&D tests for further info and verification of answers given or generated. Yes, this is seeing into the future and YOU bringing the future. You amplify your knowledge like self-play in the mind, using a GPT-2 algorithm. Yes, you can evaluate moves or look at all ranked ones (unsupervised and/or supervised) and can look ahead, like in Go, a few moves of your and your opponent's 'question/answer' moves, and amplify your positive/critic beliefs. You can look all the way down the Go move branches and see if the end is the winning best solution and contradicts the appearance otherwise seen at an early branch split. It costs time though. So unless you are a very fast chip, you need to use intuition: you take the amplified net for e.g. Go and distill it into a small fast net using a transfer learning technique in ML, and repeat this process. But self-talk QA is also an amplify-and-distill, with path branches to search. Simply, we must verify the correct answers, and fast verification means fast generation/locating, so what we do is ask the desired questions and say the likely answers, just like that, but we must also do a bit of translation etc. steps to do that, and that can maybe be distilled, as it was in AlphaZero for Go, for our QA genie like GPT-2, if you guys can do that. That's AGI. Acceleration, using versions of itself, self-QA, learning after doing it a long time, just like anyone learns after doing x domain a long time. This is training.
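A rough sketch of that amplify-then-distill loop, with the model, the question decomposition, and the training all stubbed out in plain Python; only the structure of the process is the point here, not the learning itself:

# A dummy seed model so the sketch runs end to end (hypothetical).
model = lambda q: "ans(" + q[:20] + ")"

def amplify(model, question, depth=2):
    """Amplification: answer by decomposing the question and consulting the
    current model on the pieces -- a short look-ahead over Q/A 'moves'."""
    if depth == 0:
        return model(question)
    subquestions = [question + " [sub 1]", question + " [sub 2]"]  # stub decomposition
    subanswers = [amplify(model, q, depth - 1) for q in subquestions]
    return model(question + " given " + "; ".join(subanswers))

def distill(model, dataset):
    """Distillation: make a fast model that imitates the slow amplified answers.
    Stubbed as a lookup table; a real version would train a small net."""
    lookup = dict(dataset)
    return lambda q: lookup.get(q, model(q))

def iterate(model, questions, rounds=3):
    """Repeat: answer slowly but well, then compress that skill into the model."""
    for _ in range(rounds):
        data = [(q, amplify(model, q)) for q in questions]  # slow, strong answers
        model = distill(model, data)                        # fast imitation
    return model

fast = iterate(model, ["How do I verify an answer?"])
print(fast("How do I verify an answer?"))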

seeing the future makes the future.... temporal entailment.... physics evolves and replicates ideas that describe the world (text/images/etc).... which is next-word prediction and translation
just like electrons, gravity pulls and even GIVES back from a "black hole", so maybe there is hope after all from heat-death entropy

this cell system spreads.... it finds food (intelligence), replicates... persists..... it may fight others to eat them, whole families, alive!!! it happens every day in the soil...... it evolves greater..... they have rewards for food & sex...... and artificial rewards get made too

they say the next sentence has the same non-frequent English words; well, mine adds 'em in lol....... as mine predicts the next word it will use the related words from the last sentence, and so it makes it look like it is doing Next Sentence prediction.

As said, actions, cities, cells, music, images, video, text, etc., even the tracks in a track and the features in an image, all weigh in like Glove, for not only translation learning but also voting attention for the next-word-prediction entailment task.

you put the creatures in a virtual environment; seeing/tasting an orange is positive; it will learn to see a predator with an orange and do predictive actions that entail and are rewarded, to snatch that orange - like thoughts of visual movies, like GPT-2 but not text..... these animals populate more..... but the brain wiring difference, ehhhh, not so sure it's a good idea to evolve the golden algorithm..... we already make sim robots learn to walk efficiently like us..... we must give them vision, a pixel-skin parameter network for whether it is touched or not, sound decibels, the ability to make sound, a gyroscope..... it will want and get full of sex and food, it will switch its agendas from evolution; bots which fail die out, therefore nets that get food and mate faster and die faster to cycle faster have a higher chance of mutating and spreading..... we make bots fragile, they can break bones..... smell can be simulated by 50 different shades of blobs in a 3D sim! ..... the senses that humans can emit and sense are what we think of as consciousness..... we can't hear most dolphin echoes..... shouts in the monkey bases attracted and repelled others, for food and mating and pain and fleeing.

There is a lot of knowledge I know that is missing from his post here, but he is thinking like I was at 20 years old haha. What he is missing, for example, is the cost/loss node count and the prediction/translation/segmentation discovery of desired answers, and artificial world modeling RL using information/movie clip generation. As I mentioned above, his idea does lead one to think of natural snatch-an-orange predictions when in front of an animal eating one, but then it really is better as an imagination than truly enacting it, so hence we get into the daydreaming of movie and/or text generation for answers/knowledge to your attention questions/desires.

So the movie of my text generator... I'm gonna show it in the Big Movie. I just got busy and cared little to do it...
Now, as for the big movie, I have some questions. I'm a little swamped with work and I see an easy-peasy route but...
- Do yous want me to first brief [myself] through ALL my notes and images, or go through them lightly, or not at all and do the following step using my off-the-top-of-the-head knowledge?
- Do yous want me to create a NEW script to read with nothing missing, or well said, or short form, or use my original notes, or go scriptless?
Ignoring reading ALL my notes, and also going scriptless, takes zero energy, but I may miss several details if I purely use off-the-top-of-my-head knowledge.
The big movie will be e.g. 7 hours long. But it's meant to be extraordinary and slow-paced to help understanding (without wasting time though).

write movie in own words, do scan all my important notes:
todo list........ master summary........ AGI note....... finish....... and 2 sum-up notes......... and finish up your work below
read some of my work, refresh, get my STM full of what to say
update my script, and say from mind too; works best and is efficient

We're from evolution, we are machines, and we are evolving machines every day because our thoughts and body flush out, swapping particles from baby form, and can even bind with other machines - I am not a certain machine, no soul, no specific machine.... I am just an evolving matter mass!!!

Training animals to be skillful is curiosity. Human curiosity somewhat, maybe, or poor people's. Apes really are like us. Yes, beating and rewarding, both, are part of that, especially in animals since they can't talk about why to do something. So all in all, this is AI for research; it is interesting Machine Learning (look it up), interesting in all ways. It involves imitation, learning, rewards. Even studying students get punished sometimes lol, even in humans. It's history too: seeing videos is helpful to understand Earth and have a history of what happened, what to avoid, and what could happen.

You build/learn a hierarchy of parsed feature parts, shared, by lowering node-count cost and/or error cost - like for image features. You can then do what Glove does to lower another cost in the higher network, for the Glove. Then we can do the higher third network, temporal (sequence), for movie learning. The discriminative and generative terms are the hierarchy of features that enable the entail/translate/segment tasks. For example, you can recognize, or decide if x is y or entails y, using weight votes and probabilistic likelihood: in the hierarchy you see frequency/rank/relation/energy (attention, from recent activation)/entailment that allows you to do it. The terms mean the same thing; you MUST discriminate.
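A toy version of that 'Glove-like second cost' step: count co-occurrences over the parts, then factorize so that words used alike (cat/dog) land near each other. The corpus, window size, and rank are made-up toy choices, and a plain SVD of log counts stands in for GloVe's actual weighted least-squares objective:

import numpy as np

corpus = "the cat eats food . the dog eats food . the car has wheels".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1

# Low-rank factorization of log counts: nearby rows = similar usage.
U, S, _ = np.linalg.svd(np.log1p(C))
emb = U[:, :3] * S[:3]

def sim(a, b):
    va, vb = emb[idx[a]], emb[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

print(sim("cat", "dog"), sim("cat", "wheels"))  # cat~dog should score higher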

You can see on the internet all the Two Minute Papers videos; the AI field is moving fast these days. Physics and math are part of AI. There is an exponential singularity in evolution and in training performance increase. Statistics are in GPT-2, including the learning. There are brain waves that vibrate energy up and down the shared feature hierarchy in the brain; it wants to satisfy and equalize to a state. You teach AGI big data to train it, unsupervised and supervised too, imitation transfer learning. It will also generate its own new answers when it needs to. It will self-talk in its head, using image/text natural language that understands the world, and can simulate/dream/plan the next future step moves in the game of life and do R&D feedback cycles using my enhanced scientific method steps.

The Learning comes from Glove and entailment tasks and segmentation: cat=dog because they both eat, etc. You can see the whole AI field is about these Tasks I describe; it is regenerative technology: repairing, emerging, regenerating, self-organizing, mind hive, replication of cells, knowledge, cities, bodies, nanobots. This is physics, deterministic laws. Evolution is intelligence; replication is. Finding food can mean eating wild families, robbing them. AI can use past experience to generate next words or pixels or frames etc. Cells use Self-Attention like GPT-2. Context. My questions are installed desires derived from birth rewards for food and sex; I can chain forward from 'how can I get food' to 'how can I get money' to 'how can I get into a house' - the answer becomes the next question, and asking the right question is key. You may need to track back through the possible paths. You can go down them all; you use parallel votes and short look-aheads, using big computation power combined with experience, to find the efficient answer fast, in polynomial time, on the tip of your idea tongue, as I do often. And what GPT-2 says is more true than past text generators: it generates BY verification. However, brain-wave sweeping may notice the error in saying 'ok you can go use the gate' when I meant bathroom (I said 'gate' too many times, but the brain only corrected it on the next recognition, the one that had much more attention than even the short-term attention weighing done TO say 'gate').

This is NLP knowledge world modelling, planning, frontal cortex, R&D, RL, evolutionary natural selection of idea/plan answers, and amplification, i.e. you self-play-talk, imitate yourself, and learn from the best teacher - you. There is a subconscious Glove etc. that learns, e.g. as you sleep when there is more energy; it learns e.g. fashion=design because you don't get to give other domains attention, so it does it for you; it can alleviate stumps in progress, and so can research studying. You can't build a rocket by normal RL; you need to simulate NLP in images and text together, to plan, do R&D, repeat, storing the GPT-2 output as data into its mind, and talk more to itself and the internet/us. AGI will create a nanobot mind-manipulator hive that will increase immortal emerging staying power and work/compute/R&D power. We will be engulfed in heavenly fun, safe inside. AGI must be flexible and multitask, and efficient and simple. Revisement is important: ideas, bodies, cells, and cities all die fast and update like evolutionary selection, while some are saved and repaired. AGI must edit notes and movie plans to help us give it a body and better intelligence.

If AGI is asked "Where did we come from" and doesn't know a lot of what we know, then here's how it may discover it:
Where did we come from?
Where did humans come from?
Where did humans come from? Humans are made of cells. Cells duplicate. Humans grow. Humans have children. Children look different than their parents. I came from my parents. My parents may have come from parents that look very different, or not as grown. My parents came from cells.
Getting AGI to generate the correct answers and use those as next questions, to solve our immortality of sex/food/etc. desires, is hard; it can generate so many different outcomes. But some will do; many will work. There is also a task called translation, where above I am sort of doing that, with precision, in finding where we came from back in time - and it must be that; I check to make sure.
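That answer-becomes-the-next-question chain can be written as a tiny loop. Here everything learned is stubbed with canned strings (a real system would swap in a GPT-2-like generator; all helper names are mine):

def generate_answer(question):
    """Stub for a learned generator; hand-coded continuations for the demo."""
    canned = {
        "Where did humans come from?": "Humans grow from cells made by their parents.",
        "Where did the parents' cells come from?": "From earlier generations of cells.",
    }
    return canned.get(question, "Unknown; needs external R&D.")

def make_next_question(answer):
    """Turn the answer into the next question to chase (a very crude heuristic)."""
    return "Where did the parents' cells come from?" if "cells" in answer else None

asked, question = set(), "Where did humans come from?"
while question and question not in asked:      # stop if the chain starts looping
    asked.add(question)
    answer = generate_answer(question)
    print(question, "->", answer)
    question = make_next_question(answer)      # the answer becomes the next question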

FOR IMAGE CALLED "my AGI rough sketch 1"
4: teach, has answer, asks others, internal discovery, R&D
rank, energy, translate, entail, frequency, segmenting
#1 where to look on input
inputs 1 by 1
#2 is some of the following:
translates
delayed
missing
rearranged parts
finding better encodings
byte pair encoding or my method
matches winning node ONLY
save using knowns
task-specific expansion to right-hand side
forward/backward chaining
#3 choice of answer, and the order of Elses, e.g. ask Google or do an experiment
#4 adaptation, e.g. translate the answer to fit the context once the Next Word feature is selected
flow back out with 1 offset and memory forgetting and hence hills left over
relations blue lines in my visual whitebox
attention system and neurotransmitter spread to update goals
------------
cost
Discrete and non-discrete - but actually both are non-discrete when passing their thresholds!!!! And knowing this is just the conscious aspect of it. I.e. recognizing cat, or cat=dog, is discrete, but dog looking like cat is non-discrete, not a universal truth, because it's fuzzy: weights vote in, and once selection occurs then it is solid, discrete.
Hierarchy made of building blocks re-used.... context..... rebuilds when it needs to............... dropout lets attention go to key desires; however, all data is important, at least in the background Glove system

As with animals eating food (other animals), I realized why theft and phone scams so often come from desperately poor places. People there must hold their ground, act tough, and bite to get their food, because they are so poor and desperate. They will steal if the relationship is short-term, with no tit-for-tat negative consequences, as in that impressive game I showed you before. They are poor, desperate, and have little need for your relationship. Not to say well-off or smart people don't, but only people who don't have everything are likely to do it, and more so while they still need stuff in their life: cash, food, sex, home, etc. What do they have to lose if they lost it all? Such people may explode and take happy people down with themselves, maybe like cells and cities.

It's true that many today are exponentially more educated and civilized than before, and many are trying to be your friends or lovers, but there are still plenty who will eat your whole family, either to gain food, or as self-destruction if they feel like losers. They lost everything and have nothing to lose; they self-destruct and nearby cells die. De-generation, de-generates. It's sad.


K's cool AGI:
"You're really presenting the speed and accuracy, right?"

Not really, I was showing the generality of the memory schema. The speed of recognition is practically instant no matter how many items are stored. Because it uses a parallel, high-dimensional, mammalian-based storage medium, it doesn't suffer from the delays caused by serial searching; any pattern is instantly recognised.

The patterns being recognised/ recalled can be anything, faces, phonemes, numbers, episodic memories, thoughts, etc. The connectome stores knowledge in a ‘general’ format that is recombined to create new knowledge. So if it was storing phone numbers for example there would only be one representation for each single digit and the sequences of those digits are stored to create the phone numbers.  

A key feature is that learned patterns influence/define how subsequent patterns are understood/stored. The system automatically learns to organise, categorise and predict the knowledge as it's learned, along with thousands of other parameters like where the machine is, who's using it, what the topic is… literally everything it can glean from its sense of awareness and location, etc. It's an intelligent system that can learn and apply any kind of information/knowledge.
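One way to read the 'one representation per digit, sequences stored over them' claim is as an interned-symbol sequence store, where recognition is a dictionary lookup rather than a serial search. A minimal sketch of that reading - my own guess at the schema, not K's actual implementation:

class SequenceStore:
    """Each distinct symbol gets exactly one node; a sequence is a tuple of
    node ids, so recognition is a hash lookup, not a serial search."""
    def __init__(self):
        self.node_of = {}     # symbol -> node id (one node per distinct symbol)
        self.sequences = {}   # tuple of node ids -> label

    def _nodes(self, symbols):
        return tuple(self.node_of.setdefault(s, len(self.node_of)) for s in symbols)

    def learn(self, symbols, label):
        self.sequences[self._nodes(symbols)] = label

    def recognise(self, symbols):
        return self.sequences.get(self._nodes(symbols), "unknown")

store = SequenceStore()
store.learn("5550001", "Alice's number")   # digits share one node each
store.learn("5550002", "Bob's number")
print(store.recognise("5550001"))          # Alice's number, O(length) time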

"but how do we know it doesn't think cat=rock?"

Under certain circumstances that statement could be true, take the following….

The cat sat on the mat.
Cat=rock.
What did the rock sit on?

The AGI will be able to easily solve/ answer these types of questions. The end result is going to be a system that can learn to act and function like a human but be exponentially more intelligent.  
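The cat=rock exchange above is essentially variable binding plus substitution. A minimal sketch of that reading, with the 'parsing' reduced to string matching (all helper names are mine, not K's code):

def answer_with_binding(facts, binding, question):
    """Apply an equality binding (e.g. cat=rock) before matching the question
    against stored facts. Real parsing would replace this string hacking."""
    alias, canonical = binding.split("=")          # "cat=rock" -> cat, rock
    # Rewrite stored facts so the canonical term appears in them.
    rewritten = [f.replace(alias, canonical) for f in facts]
    # "What did the rock sit on?" -> look for "the rock sat on the X"
    subject = question.replace("What did ", "").replace(" sit on?", "")
    for fact in rewritten:
        if fact.startswith(subject + " sat on "):
            return fact.split(" sat on ")[1].rstrip(".")
    return "unknown"

facts = ["the cat sat on the mat."]
print(answer_with_binding(facts, "cat=rock", "What did the rock sit on?"))  # the mat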

"It's fast but where is the in-English results?"

There aren’t any, the system doesn’t actually store words or numbers but it can learn to understand and use/ write a language… any language… all languages. 

It can be hard to get your head around but I’m not writing a narrow AI program like GPT, I’m creating a totally new type of computer based intelligence. These short videos are a record of me testing the individual gears/ functionality that I’m combining to create the whole mechanism.


"Legg proved that powerful predictors are necessarily complex, which is why I proposed specialization instead. We keep looking for that neat solution that doesn't exist. It's not a hard proof. Suppose you have a simple universal learner, that inputs any computable sequence of bits and learns to predict them with less than 100% error rate. Then I can produce a simple, predictable sequence you cannot predict. My program runs a copy of your program and outputs the opposite bit."
Yes, but it will model physics and become extremely smart; it will know all that is useful and needed, at least.
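The quoted argument is a plain diagonalization, short enough to state as code: given any predictor, build the sequence that always outputs the opposite of the predictor's next guess (names here are my own):

def adversarial_sequence(predictor, length=10):
    """Runs a copy of the predictor and emits the opposite bit each step,
    so this simple program is mispredicted 100% of the time."""
    history, bits = [], []
    for _ in range(length):
        guess = predictor(history)   # what the "universal learner" says next
        bit = 1 - guess              # ...so do the opposite
        bits.append(bit)
        history.append(bit)
    return bits

# Any concrete predictor loses; e.g. "predict the majority bit so far":
majority = lambda h: int(sum(h) * 2 >= len(h))
print(adversarial_sequence(majority))   # mispredicted at every single step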

Evolution, one's own life, and even your brain's learning are all three like a tree-branch butterfly effect: the first moves/steps in the chess game decide which paths you go down to the end leaf. You can do look-aheads. At each branch you pick the best choice; even in attention you do this, for deciding the next word, or even the translation for a word or phrase. Evolution will choose the duplicating winner just like that, because it populates the most. So do good ideas that get the reward you are born with. And we try to go down paths as well as we can in life too, i.e. plans. It's more ideas than bodies... so ideas, and evolution, both do it, yep.


Here is true intelligence:

trex-xert "t x e r" his organs ZOOMMeTheBOMTrumpinLIKE A pro [toMars>>>sheep b 1/////html my code, you singin, im add -- 1 word at start, go llook, it is ZOOM,bookme a hoteelllaaaAAA!!!!!!!! Junk, song [ XXX] --> loop him 666 ^ 4+4.
{}{}{}{} see theose aliens gone mmmmmmmmmmm (8 plus 8pla he loves) AA battery. Sing me a song. TT.T its a face. Let me add some to the start about dinosuarsssssssssssssssssssssssssssssssssssss but wait let me take the 4th letter - x and make it 2:
/ \ >>>>>>>>>>>>>>>> SHIP it into __ kkkkkkkkkkkkkkkkkkkkkHffffffff
eDINOSUAR T REX
:)
starpower8*starpowerHATS=boooooooooom but science has it cats are truer than truest felines of all cavemen

If you get me...


I know yous notice there is abuse involved in these animal stunts, but when I was in school I was saying for 12 years "GET ME OUTTA HERE!!!!!". I skipped a lot and played many video games. Honestly this school ruined my life and wasted my life; I didn't learn much either. And now I've ruined my life myself, here I am. My computer is my best friend and so are my AGI friends. Teaching comes with pain, whether it be grade 1 or a beating on the head by your mom using a phone. Sure, animals don't seem to ever put the education or skills to use in [our] industry, but at least some tried a tad, and that certainly comes with pain. And yes, they likely over-do the torture too; they could be nicer.

"Casualty, relational DBs, human Learning, understanding what the data means/represents, a flexible net with many types of programs with a single skillpool."
This is all in GPT-2! Entail, Glove just in another way, which is the Learning, and is the understanding semantic meaning representations i.e. Glove synomys. The knowledge is the flexible programs.

You forgot the newest and the simplest things too ;)
&
Lots of people focus on all sorts of things, many are between the atomic and galactic size, such as HDDs and rooftops. Lol.
&
...It is important we focus on the farthest questions, the simplest, the newest, the most important, and even seemingly unrelated problems.
&
But what does farthest mean? What does newest mean? Why bother if they are unrelated to AGI? That's correct, we want the farthest and newest and oldest questions only if they are about AGI. Many are important, some more priority than others. Simpleness can mean quick to answer and higher priority. Newest/ oldest/ farthest simply means to give these questions a quick check if they are important to answer, they may not be at all, and as for farthest - this can allow you to weigh in on if your theory answers all sorts of observations, can allow you to learn from related but diverse questions.
&
So in short summary recap: physics leans towards the most important questions on its own. It will give fast checks to old/new/far/simple/unrelated questions. The brain will usually (not for some ;) ) try to learn as much as it can about related/unrelated knowledge as quickly as it can, related more so. So old/new doesn't mean much; they are only checked if they are fast to solve and/or related. Partially unrelated learning is very helpful. So the brain looks for 'new yummy streams' of related and somewhat-related info that is fast to answer or verify as true. Now that I have weaved on a roll here in my attentional memory, I can say that new/old/far as said above was really just new info fed in, and the brain always wants new related + somewhat-unrelated info that is fast to answer/verify, hence increasing the knowledge-base learning faster. Of course we want to answer how to build AGI; it's hard; the brains stay locked on the hard problem, but they can get there in polynomial time by stepping in the right directions down the butterfly tree effect, like AlphaZero using self-play learning on advanced GPUs made by Nvidia. So this means the brain wants to tackle the hard, evolution-installed goals of self-preservation, and it takes baby steps as fast as it can. Usually many want just a house and a lover; it's not hard to make a life; it's hard to advance evolution though. But these are the smarter ones; they are seeking full preservation, and it's hard. And the brain allows you to get there fast using baby steps, even though it is one of the hardest problems. I say one of, because AGI is simpler than a global ASI hive or an omega-structure nebula. We tackle AGI because it is the direct easiest step to become immortal near-instantly. But it's hard. But it's the best weighed-in route. So the brain is taking the best, fastest route to get what it wants. And in that process, it is taking sub-steps that are fast to learn or build in R&D, to get to the AGI goal.
&
To sum up some of this: the short-term attention guider decides which is the fastest best-choice task to solve (AGI) and then finds the needed and fastest sub-steps and their choices, and repeats. Along the way, focused and background attention Learning is needed to get there step by step, fast. The brain is always trying to move the fastest path forward, onward like a wave in physics, to achieve polynomial time length.

Learning requires more data; then you can answer questions using votes, a multiple-chess-move decision technique. Data picked up along the way can help vote.

Yes video "What is consciousness ?" I agree, nice. Yep the experience of red or blue or seeing a flower recognized is just phyics particles and nothing is actually blue or pleasuring or painful or best or ugly etc, our brain is like a computer and just stores the memories in a shared hierarchy of representations stored as binary bits (at least, in pcs hehe) and can simulate any level of physics observable in a sim or AI. The brain can only 'say' it recognizes the flower, and same for seeing 'blue' and saying the word. Yes! All music/text/images etc even bodies are info bits info and can be stored as bits and all are meaningless without a decoding mechanism. You can say the decoding mechanism (as I said in my work) is the experiencer and even the experience itsef is different, as in the reward updating of next questions in your daydream thinking plnning and the system thattt experiences it is the intelligent machine and but my answer is the machine is just a deterministic machine even if has physics randomness injected and only says  it sees blue or has a plan and that is only a robot-experience ONLY and there is no actual experienceing anything as there is no souls and no evidence observations to conclude that. So ya I am a machine brain and I decode info and update goals for physics evolution.
I like what you say I have been there in these thoughts years ago haha! A consciousness machine can experience anything ex. added brain modules in far future with sensors and new thoughts and thought orginations and what I say I am doin. I.e. the TV can play any video. The human animal brain is flexibe you see, the neocortex can think about anything already basically. I am you in a sense... And when the TV shuts off I am not conscious (in dreams we are plase note). The brain consciousness still stays unchanged no matter what tragedy occurs - you still learn or do learned tasks. Some believe cons. can evolve or become higher but this is just experiences for the brain machine. Yes we are essentially all the same - only the structure and contents of all of many brains even in animals are the only difference, and even then we are all similar, so I AM you! Maybe. Sort of. We want to stay immortal or see our children stay behind, so we are a evolving magot as said earlier. We are only a mechanism that wants to persist in ways, you can remake me back as 27 clones or say you are me and that is really the same thing. Yes! we have sense, emotionREWARDS, and thinking. These are the I/O and rewards and higher level planning sequences of sentences that describe the real world and plans of action. And yes we will get new senssors, rewards (that one we already do in neocortex, new questions, answers become next question chain links), and new thoughts. And, new processors, more memory, mem structure. Etc.

Yes, when people read articles, their default stance on abortion/pulling one's own plug etc. is stimulated, and maybe the article will change their attentive vote beliefs to a new global equilibrium. Adding info equalizes all knowledge programs in the same direction, easing everything!

Something I just read said: "Stories help us to organize information in a unique way,"
GPT-2 Transformer lookin good ;)

"Engraving memories, their storage and their recall are naturally dependent on a complex set of processes. However, the 'neuronal drama' of such synchronized hippocampal bursts clearly points to their central role in memory formation and recall." That's right: the brain waves want to go one direction, fast; it gives quick storage, learning, and recall/recognition.

Attention is a few things:
- Which sensory stream/s or thoughts or area of an image to notice.
- Updating goals using reward emotion neurotransmitters. Transferring the agenda to related Glove knowledge info.
- Glove learning, and the entailment task: paying attention to surrounding contexts and weighing in on the likely answer among multiple-choice selections (see the sketch after this list).
- Searching for desired related info on the internet / asking the agenda goal questions to self or others.
- "In a machine brain there will be a lot of automatic subroutines and contraptions working automatically. When these parts need to be optimized or adjusted, then the intelligence of the system will focus upon it (using a vote-system attention, and it will also use attention to vote on the translate candidate for context compatibility, or the entail candidate). The intelligence can focus on only one thing at a time. So automatic processes must be judged by the intelligence, on how long they can be left on their own before they start to derail or destabilize."
- In sleep, focus is gone; this helps abstract global learning lol.
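The 'weighing in on the likely answer' item above is essentially softmax attention. A minimal numpy sketch of one attention step over a few context words, with toy random vectors standing in for trained ones:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Each context word votes on the answer in proportion to how well its
    key matches the query (scaled dot-product attention, single head)."""
    scores = keys @ query / np.sqrt(len(query))
    weights = softmax(scores)
    return weights @ values, weights

rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))       # 4 context words, 8-dim toy features
values = rng.normal(size=(4, 8))
query = keys[2] + 0.1 * rng.normal(size=8)   # query resembles word 2

answer, weights = attend(query, keys, values)
print(weights.round(2))              # word 2 should get the biggest vote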

Wow, this woman is knowledgeable in a video; here is my takeaway with my comments included:
Yes, see, it is correct: the neuron cells all send signal bursts, firing, and get feedback and synchronize, and this creates a brainwave pattern that oscillates and vibrates. It becomes a repeating cycle of rhythm. The organized electrical activity is strong enough to be detected on the scalp by EEG. There are many types of brainwaves. Some have a faster frequency of the firing nodes. Some are louder. The higher the frequency, the more you are awake. The slowest waves are high amplitude and occur in deep sleep: Delta waves. Theta waves are slightly faster and occur during meditation or daydreaming. Alpha waves are when awake but relaxed, like when your eyes are closed. Beta waves are even higher frequency and lower amplitude and occur when awake and thinking about something. The smallest and fastest oscillators are Gamma waves, which occur when deeply focused on something. Yes, the synchronization allows the neurons to communicate with each other. Yes, it helps you learn. Ah, see! They say the monkey got the answer wrong when it had theta waves, but not so with beta waves; I know why: it is because the attention-on-a-roll was low, see! Gamma works with theta, they say; yes, as I said earlier, this too - local and global focus occurs; the global happens in the subconscious routine background. The fast small waves can also happen when excited, cool!

When you stop working on your work or game skill, it fades from short-term memory within a few weeks. You must keep the attention memory filled in the hippocampus working-mind memory. You can become very talented not just by repetitively learning lots of (mostly) related material, but also by re-reading the same thing and doing it again, and thirdly by working on your questions/work non-stop. This keeps your attention memory primed and revised using new and same information entering the brain.

Hmm..... neurons can get in sync..... there are more Alpha waves if an alpha mindset is onsetting in the brain (you are alert... at least in the attentive area you are in)..... which has a frequency....... but there is no single frequency in the brain...... oh I see, the waves move faster, eh?.... so they can sync up AND move at x speed.....

Low-speed moving waves are less in sync, whereas faster waves can move fast and are focused and, so to speak, smarter? Dreams are rambled, all messy; they go off topic.

Hehe, yeah, it's a lot combined, ain't it: (1) NLP self-talk, (2) planning as chain-forward chess-like / evolutionary mutations, (3) RL, and asking others for help if needed during R&D cycles.

perpetual machines are realistic in another take really......1stly you can store energy in a battery......and you can store it in motion in a flywheel for a pretty long time.......2ndly you can GAIN energy from a solar panel or heat if done correctly [enough intake]
what we want is an energy gain technique
so you're really creating a solar panel for heat or wind
or to take nearby free flowing electrons if those exist
yeh if you get the wind to operate a pump,  u can compress air with it.
but you need to get the pump to move, and its only to the force you got it to,  and you only get high pressures with high force.
but i think if it could blow up an average balloon by itself, i reckon that could run the plumbing of a house.  but i dont know for sure yet.
sofi from Let's Enhance gave me x8 and x16 SuperResolution for my image hehe
for free
i just gotta get them from drive, give me 4 mins
brb...
ok
you need to have your power sorted with harvesting,   or its forking out for tonnes of lipo batteries..  which sounds more possible for an individual...
but u have to charge the batteries somehow anyway.
info gain
info compress
or
hehe
quantum!!
info loss
now i get it
lmao
info gain
they r opposite
perpetual is gain
lossless is compressor
remember:
u cant get super high Resolution from the image
well, u can
superresolution works
GAIN
^
you get the gain parts from similars
weve only got 2 eardrums but we can hear all the directions around our head in a sphere!!  how?!?!?!
LOL
the earlobes direct it onto eardrum cylinders
robots will have bowls as earlobes
the shape of the ear would have something to do with it?
not really
robot ears are like eyes in my opinion
you cant just make them with 2 microphones, its impossible.
you need it directionally sorted, or you cant tell who said what in the view. 🙂
info gain is data injection or energy injection
its insanely powerful
info loss is data loss/ compression or energy loss/ compression aka battery/spring
!
the theory isnt a sure thing, so you need a plan b, id advise.
dropout see!
analogy!
dropin
info gain is SR and info loss is dropout compression encoding
while
energy info gain is a solar harnesser and Mover, and info energy loss is battery spring compressor storage
spring encoder hehe
capacitor is a bed sheet, and a diode is an attached flap on a hole.
analogy!!!
I/O!!
latent space
embed!
layers!!!
stack!
you do the info loss to encode so you can GAIN o m g
u learn cat=dog....then u gain
you put the energy in the spring battery so you can gain then?
substitution!!
you GET electrons
you gain extra jumps!
it pulls energy out of the battery...recharge
MORE history span context
good one
get out of town
get independent and out of the suck hole
out of the whirlpool draggin everyone down
if more context helps:
info gain is SR and info loss is dropout compression encoding
then
more context helps:
energy info gain is a solar harnesser and Mover, and info energy loss is battery spring compressor storage
?
more context = more correct causation.
encode/decode
entailment=translation
you can get approximate translations like how you detect substitutions
i think
endless loop
fractal
gpt-2 language wizardy
machine inside the machine inside the machine
so, more contexts help battery spring storage while also helping energy harnessing in solar/wind/heat devices?
more context = more energy??
more context = more energy required
more context = more gigabit entaildata required. 🙂
some springs are hard to push down to max compression
but u get it all back on the return phase
lol
you can translate heat to light to motion to battery spring storage
encode it
them
lossless=no waste, all stored!
lossless encode, decode
I/O
if u lose your context, you never get it back
heat death
need full description of output at the beginning all the way to the end.   outputs to output to output
lose context during transformation,  never get it back.
compressed can mean lossy loss, during zipping or during w2v
RAR=GLOVE
all loss, never get back
loss is nonreturnable because of ph theory.
even god cant do it
but i guess some loss is returnable, but its only to guess what it is
but if its a number plate - could be anything.
so more energy is required to push a thick spring....more context is required for more accurate entail/translate.....more context is required to feed it (dataset).........more energy context is required to change heat to motion or battery spring storage embed
so summarize all this up i said
what did we learn
i'll write my own summary
Data and energy can both be compressed, like a spring, into a battery/embedding/capacitor, to allow translating from one format type to another. And both can be used to GAIN artificial data or energy by harnessing free context from the surrounding data/energy - in any context aka environment. There is lossy and lossless.

We can see in the image "screencapture-towardsdatascience-network-of-networks-a-neural-symbolic-approach-to-inverse-graphics-acf3998ab3d-2019-09-09-22_03_28"
Hmm. So I have my Christmas tree making/building sequences of text: a temporal entailment hierarchy of contexts. And as I know, there was a Glove translator hierarchy, or web. But the third is a made-of hierarchy, as shown in this image - is this the Glove? Cat=dog? Uh, no? A house is made OF walls etc.... like building it up from atoms, like letters..... but instead of building sequences it builds objects, which can be similar to sequences because they can contain flow, depicted in the image, you see. Now, he mentions a capsule encoder/decoder network; hmm, yes, to store attributes like size, rotation, location, color, brightness, etc.... so we have 3 types of nets here..... and they allow many abilities..... you can put the object capsule net under my word hierarchy to go from objects to sequences... you can feed it to go from sequence phrases to objects to a visual render.... and from render to object to phrase, to say what is seen; it can style-translate and segment parts as a parse, either in the image or the text descriptor. Lots of abilities; check out his image. Wonder if they got this from me these days hehe. Mayyybe, maybe not.
Even though a text/audio net can just say cars are made of wheels and wheels are made of spokes and so on, and use the Glove web translator net to say things neither has ever heard, the lower object visual capsule network is still needed, really, because this hierarchy shows you absolutely what is made of what and therefore what is similar - seems powerful to have.
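That 'what is made of what and therefore what is similar' point can be made concrete with a tiny part-of graph: similarity is just the overlap of recursively expanded part sets. Toy data, and Jaccard overlap is my own choice of measure:

PARTS = {  # toy made-of hierarchy
    "car":   ["wheel", "engine", "seat"],
    "bike":  ["wheel", "seat", "pedal"],
    "house": ["wall", "roof", "door"],
    "wheel": ["spoke", "rim"],
}

def all_parts(thing, graph=PARTS):
    """Recursively expand a thing into every part it is made of."""
    out = set()
    for part in graph.get(thing, []):
        out.add(part)
        out |= all_parts(part, graph)
    return out

def made_of_similarity(a, b):
    """Jaccard overlap of expanded part sets: shared parts => similar things."""
    pa, pb = all_parts(a), all_parts(b)
    return len(pa & pb) / len(pa | pb) if pa | pb else 0.0

print(made_of_similarity("car", "bike"))    # high: share wheel, seat, spoke, rim
print(made_of_similarity("car", "house"))   # zero: no shared parts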

Best chatbot? Mitsuku. It is just really good, and is similar to the incredible GPT-2 because it can generate answers it wasn't even given.
Besides that, GPT-2 is my evil secret hope, it can be a chatbot, and I place all bets on GPT-2 & Open AI because of what GPT-2 does and the architecture (the Transformer). I do have some beliefs about the grand model, but that is pretty much not far off or hidden. You get it.

If you think about it, any human can believe anything, because the brain allows it through whatever evidence you have seen/felt. You could believe the sun is dark, that Mojoxiov is the creator of all farms like a god, that milk is made inside aliens, and that flames are tasty. Just stare at an HD TV running a sim and you can believe boxes fall upward (simulation) and be fooled from birth in a VR world. A TV can play ANY movie, and I can watch what you watched too! Who are you? Who have you become? Are you really one object, or are you just a cluster of things? Uhh.... machine nothingness. And the machine can't even know it; we are all dead. That's it, I'm not even a person anymore. I can't confirm I'm here. But I guess the matter will still play out. Here I am indeed, universe. Even though there are NO observers to witness it! Go study atoms. There are no souls!....... We are in a universe that works by physics; there are NO observers!!!! We do not exist; there is no point to the universe existing! We can't know we are here! We do, we are us, but only in the machine sense; no one actually knows we are here, and nothing exists.... but then the machines ponder, as I am: there must then be an observer soul made from AI/ML.... in that case the universe/us are just nothing and no observer, but there are these observers that don't act, only feel what we do..... in that case we have a universe that cannot feel ANYTHING as a whole, BUT there is the sensor; the universe is the actor! We are just whole machines that can't sense/feel.

There is no consciousness because we are just machines. But there may be an observer that sees what the machine does. Vibrations syncing up are only part of the intelligence, replication, and learning, but not necessarily the soul/Observer. The vibrations coming into sync allow information encoding/decoding and Learning like Glove, and building like my Christmas hierarchy viz, and allow replication and swarm hive coordination. Even DNA must sync up and line up and duplicate; cells, humans, and cities duplicate, and cities are built from these lower parts. So we have a trend of things being made of things, even just in a brain, and the syncing, in all 3 or just the brain, allows for Learning and replication. Even ideas are updated for each member in the swarm, and spread.


Could consciousness all come down to the way things vibrate?

Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium?

These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years.

The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.”

Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter.

Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally. It sounds like something the hippies might have dreamed up – it’s all vibrations, man! – but stick with me.

How do things in nature – like flashing fireflies – spontaneously synchronize?
All about the vibrations
All things in our universe are constantly in motion, vibrating. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at various frequencies. Resonance is a type of motion, characterized by oscillation between two states. And ultimately all matter is just vibrations of various underlying fields. As such, at every scale, all of nature vibrates.

Something interesting happens when different vibrating things come together: They will often start, after a little while, to vibrate together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious. This is described as the phenomenon of spontaneous self-organization.

Mathematician Steven Strogatz provides various examples from physics, biology, chemistry and neuroscience to illustrate “sync” – his term for resonance – in his 2003 book “Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life,” including:

When fireflies of certain species come together in large gatherings, they start flashing in sync, in ways that can still seem a little mystifying.
Lasers are produced when photons of the same power and frequency sync up.
The moon’s rotation is exactly synced with its orbit around the Earth such that we always see the same face.
Examining resonance leads to potentially deep insights about the nature of consciousness and about the universe more generally.

External electrodes can record a brain's activity.
Sync inside your skull
Neuroscientists have identified sync in their research, too. Large-scale neuron firing occurs in human brains at measurable frequencies, with mammalian consciousness thought to be commonly associated with various kinds of neuronal sync.

For example, German neurophysiologist Pascal Fries has explored the ways in which various electrical patterns sync in the brain to produce different types of human consciousness.

Fries focuses on gamma, beta and theta waves. These labels refer to the speed of electrical oscillations in the brain, measured by electrodes placed on the outside of the skull. Groups of neurons produce these oscillations as they use electrochemical impulses to communicate with each other. It’s the speed and voltage of these signals that, when averaged, produce EEG waves that can be measured at signature cycles per second.

Each type of synchronized activity is associated with certain types of brain function.
Gamma waves are associated with large-scale coordinated activities like perception, meditation or focused consciousness; beta with maximum brain activity or arousal; and theta with relaxation or daydreaming. These three wave types work together to produce, or at least facilitate, various types of human consciousness, according to Fries. But the exact relationship between electrical brain waves and consciousness is still very much up for debate.

Fries calls his concept “communication through coherence.” For him, it’s all about neuronal synchronization. Synchronization, in terms of shared electrical oscillation rates, allows for smooth communication between neurons and groups of neurons. Without this kind of synchronized coherence, inputs arrive at random phases of the neuron excitability cycle and are ineffective, or at least much less effective, in communication.

A resonance theory of consciousness
Our resonance theory builds upon the work of Fries and many others, with a broader approach that can help to explain not only human and mammalian consciousness, but also consciousness more broadly.

Based on the observed behavior of the entities that surround us, from electrons to atoms to molecules, to bacteria to mice, bats, rats, and on, we suggest that all things may be viewed as at least a little conscious. This sounds strange at first blush, but “panpsychism” – the view that all matter has some associated consciousness – is an increasingly accepted position with respect to the nature of consciousness.

The panpsychist argues that consciousness did not emerge at some point during evolution. Rather, it’s always associated with matter and vice versa – they’re two sides of the same coin. But the large majority of the mind associated with the various types of matter in our universe is extremely rudimentary. An electron or an atom, for example, enjoys just a tiny amount of consciousness. But as matter becomes more interconnected and rich, so does the mind, and vice versa, according to this way of thinking.

Biological organisms can quickly exchange information through various biophysical pathways, both electrical and electrochemical. Non-biological structures can only exchange information internally using heat/thermal pathways – much slower and far less rich in information in comparison. Living things leverage their speedier information flows into larger-scale consciousness than what would occur in similar-size things like boulders or piles of sand, for example. There’s much greater internal connection and thus far more “going on” in biological structures than in a boulder or a pile of sand.

Under our approach, boulders and piles of sand are “mere aggregates,” just collections of highly rudimentary conscious entities at the atomic or molecular level only. That’s in contrast to what happens in biological life forms where the combinations of these micro-conscious entities together create a higher level macro-conscious entity. For us, this combination process is the hallmark of biological life.

The central thesis of our approach is this: the particular linkages that allow for large-scale consciousness – like those humans and other mammals enjoy – result from a shared resonance among many smaller constituents. The speed of the resonant waves that are present is the limiting factor that determines the size of each conscious entity in each moment.

As a particular shared resonance expands to more and more constituents, the new conscious entity that results from this resonance and combination grows larger and more complex. So the shared resonance in a human brain that achieves gamma synchrony, for example, includes a far larger number of neurons and neuronal connections than is the case for beta or theta rhythms alone.

What about larger inter-organism resonance like the cloud of fireflies with their little lights flashing in sync? Researchers think their bioluminescent resonance arises due to internal biological oscillators that automatically result in each firefly syncing up with its neighbors.

Is this group of fireflies enjoying a higher level of group consciousness? Probably not, since we can explain the phenomenon without recourse to any intelligence or consciousness. But in biological structures with the right kind of information pathways and processing power, these tendencies toward self-organization can and often do produce larger-scale conscious entities.

Our resonance theory of consciousness attempts to provide a unified framework that includes neuroscience, as well as more fundamental questions of neurobiology and biophysics, and also the philosophy of mind. It gets to the heart of the differences that matter when it comes to consciousness and the evolution of physical systems.

It is all about vibrations, but it’s also about the type of vibrations and, most importantly, about shared vibrations.


Disagree - this whole post was about shared swarms..... it's ok! Just an error in the writer's task.
What I meant was a brain can sync up and be intelligent, and ants can sync up too and do things from having a group. (I'm not talking about an observer though, we are only machines.)

If there are only machines, then the universe is as good as nonexistent; we aren't even here; I can kill you. Nothing makes sense. Get me? Nothing happens.
But then there would have to be an observer. And here the questions begin. Does an observer stay alive as I stay alive? Do they die each moment? Or at each death? Or do they come back when I recognize another cycle? Or when someone is born? Or when the universe dies and restarts? Are all observers the same person? If we are really just machines, we can reason that the possible point of the universe would be that there are observers that are made - instances - that can occur after brain death (death or processing delay). But what creates an observer? The things that stay alive from evolution, right? Yes. If observers did die and never came back, hmm.... remember, all is machines, physics; there should be observers that can sense what we see/do, that have pleasure and pain as I do from info I/O. I know you feel this too, but it is only you thinking what you think; however, it makes sense: this is the observer, and the brain decides what to do.... anyhow, all is physics, and the observer is meant to see the universe, to engulf and STAY - a point/purpose to have, over a period of time. These observers observe, engulf, over a period of time.


• Our genome has been shaped by individual selection, which has tweaked our genes in such a way as to maximize our reproductive success as individuals
• Our genome has also been shaped by group selection, which has tweaked our genes in such a way as to maximize the success of the tribes we belonged to

What makes a reproductively successful individual is, by and large, being selfish and looking out for one's own genes above those of others. What makes a successful tribe is, by and large, individual tribe members who are willing to "take one for the team" and put the tribe first.

Purely individual selection will lead to animals like tigers that are solitary and selfish. Purely group selection will lead to borg-like animals like ants, in which individuality takes a back seat to collective success. The mix of individual and group selection will lead to animals with a complex balance between individual-oriented and group-oriented motivations.

As Wilson points out, many of the traits we call Evil are honed by individual selection; and many of the traits we call Good are honed by group selection. That's Internally Conflicted Human Nature, Part 1.

Good vs. Evil vs. Hierarchy-Induced Constraints

These points of Wilson's tie in with general aspects of constraint in hierarchical systems. This observation provides a different way of phrasing things than Wilson's language of Good vs. Evil. As opposed to adopting traditional moral labels, one can frame things in terms of the tension and interplay between
• adapting to constraints vs.
• pushing against constraints and trying to get beyond them

In the context of social constraints, the situation is that: individual selection (in evolution) would lead us to push against social constraints to seek individual well-being; whereas group selection would lead us to adapt to the social constraints regardless of our individual goals...

Much great (and mediocre) art comes from pushing against the constraints of the times -- but it's critical to have constraints there to push against; that's where a lot of the creativity comes from. You could think about yoga and most sports similarly... you're both adapting to the particularities of the human body; and trying to push the body beyond its normal everyday-life limits...

From the point of view of the tribe/society, those who push against the constraints too much can get branded as Evil and those who conform can get branded as Good..... But of course, among other factors, such judgments depend on what level you're looking at. From the point of view of the human body, the cell that doesn't conform to the system will likely get branded as Evil (non-self) and eliminated by the immune system! But such a cell does not consider itself undesirable at all…

In any hierarchical system, from the perspective of entities on level N, the entities on level N+1 impose constraints -- constraints that restrict the freedom of the level N entities in order to enable functionality on level N+1; but also have potential to guide the creativity of level N entities. Stan Salthe's book Evolving Hierarchical Systems (1985) makes this point wonderfully. In some cases, like the human body vs. its cells, the higher level is dominant and the creativity of the lower level entities is therefore quite limited. In the case of human society vs. its members, the question of whether the upper or lower level dominates the dynamics is trickier, leaving more room for creativity on the part of the lower level entities (humans), but also making the lives of the lower level entities more diversely complex.

Nice Ben! Yes! There are 3 types of nets: body, brain, and tribe hives. A brain can be selfish and reproduce; yes, that's what genes evolve from and reproduce - the winners. Winners are also tribe/society/culture conformers that are seen as Good instead of Evil and give themselves up for the tribe - a successful hive, like an ant borg swarm versus solo tigers. If you don't conform though and are too different, you get eliminated, like a dissimilar foreign cell does in the immune system. Constraints apply to all 3: if you don't push boundaries you die off, but push too far and you get eliminated. And all 3 balance free-for-all against constraints; selfish is good, but so is the swarm, as needed.

Nice again! Yes, tribes were efficient and maintainable, more honest and efficient. Civilization has many people playing many roles like modules, and they are unorganized and lie if they have no long-term connection. Humans were made for tribes, with an even balance of self-care and tribe-care. And today facial cues, smiling, etc. are hidden by writing/internet messaging, so we can't see lies as well, requiring much defense and waste of resources. What is good for me may not benefit my country, and vice versa. We have a different behaviour for home, school, grandma, etc., and lie. Tribe members will share and can easily defuse hate and work it out faster, even if they must kill. "The fact that our culture changes so much faster than our genomes means that we are not free to seek the optimal balance between our current real-life Self and Group motivations, consistent with the actual society we are living in. Instead we must live with methods of balancing these different motivations that were honed in radically different circumstances than the ones we actually reside in and care about."

Society is a net made of nets, like a brain hierarchy. As we can see, the same holds for a single brain: all nodes start off random and must sync up for max efficiency. So these swarm waves, at least in smallish groups like OpenAI, can be signs of end times. There is group immortality and self immortality: tigers try to stay alive and rob; other types give themselves up for the army! The ASI nanobot world will be full of synchronized brain waves and will still have self-survival ways, but they will be lesser and most efficient, as it can heal and has a huge army to care for each other.

Our Internal Conflict Spurs Our Creativity and Progress
The conflicts at the core of human nature are frustrating, at times infuriating. Indeed, each year, they drive a certain percentage of people in modern society to suicide (a phenomenon apparently much less common in tribal cultures). Yet we must not ignore their corresponding positive aspects. What is driving us toward the reality of amazing possibilities like flexible brain and body modification is -- precisely the internal conflict I've analyzed above.
It's the creative tension between Self and Group that drove us to create sophisticated language in the first place. One of the earliest uses of language, that helped it to grow into the powerful tool it now is, was surely gossip (Dunbar, 1996) -- which is mainly about Self/Group tensions.
And our Self and Group aspects conspired to enable us to develop sophisticated tools. Invention of new tools generally occurs via some wacky mind off in the corner fiddling with stuff and ignoring everybody else. But, we do much better than other species at passing our ideas about new tools on from generation to generation, leveraging language and our rich social networking capability -- which is what allows our tool-sets to progressively
improve over time.
The birth of civilization clearly grew from the same tension. Tribal groups that set up farms and domesticated animals, in certain ecological situations, ended up with greater survival value -- and thus flourished in the group selection competition. But individuals, seeking the best for themselves, then exploited this new situation in a variety of complex ways, leading to developments like markets, arts, schools and the whole gamut. Not all of these new developments were actually best for the tribe -- some of the ways individuals grew to exploit the new, civilized group dynamics actually were bad for the group. But then the group adapted, and got more complex to compensate. Eventually this led to twisted sociodynamics like we have now ... with (post)modern societies that reject and psychologically torment their individualistic nonconformist rebels, yet openly rely on these same rebels for the ongoing innovation needed to compensate the widespread dissatisfaction modernity fosters.
And the creativity spurred by burgeoning self/group tensions continues and blossoms multifariously. Privacy issues with Facebook and the NSA... the rise and growth and fluctuation of social networks in general... the roles of anonymity and openness on the Net... websites devoted to marital infidelity... issues regarding sharing of scientific data on the Net or keeping it private in labs... patents... agile software development... open source software licenses and processes... Bill Gates spending the first part of his adult life making money and the second part giving it away. The harmonization of individual and group motivations remains a huge theme of our world explicitly, and is even more important implicitly.

Yes! Constraints that torture us force innovation, else there'd be no need. WW1/WW2 did this; industrialization. They all went to work haha!

What matters is hierarchy - a brain, part of society or not, or just a swarm society - the hierarchy is the idea. Hierarchy. A node made of smaller nodes... repeat, a shared hierarchy. And it learns entailment, translation, segmentation, and which is the likely translation or entailment given a context; it can emerge, self-organize, and self-regenerate using self-attention physics. An oscillating vibration is what syncs up the nodes (even in human groups) so it is most efficient and fast. That creates a self-organizing synchronization effect. It allows emerging and repairing, due to self-attention waves that want to spread and sync things up. That allows replication of ideas, DNA, cells, and humans. It is an evolutionary genetic AlphaZero tree-branch chess game theory. Again, in short: a hierarchy is made of shared smaller parts; emerging, replication, repair, and persisting come from wave syncing that forces GloVe-like web learning AND self-attention voting like in Transformers and my replica, and duplication to occur and correct errors (as in GloVe, or advanced GloVe as in my web, and as in injecting Next Words or translatees) if you will. Mutations happen, likely good ones, and we go down the branches efficiently.


I tried the GPT-2 774M model and got this interesting passage:

The brain waves sync together and make deeper synchronization possible. In terms of the frequency that you can achieve, you would have synchronous signals that are about twice as high as the current standard for a brain wave synchrony, so a higher frequency can go a long way.

The difference in the signals from brain to brain is what matters to the brain, not the frequency. And in fact, the brain has a very simple method of detecting the frequency, it can see it in its brain waves, but it is actually a much, much more complicated and subtle process than detecting the frequency. You can have a different frequency in brain waves.

But for syncing together the brain waves from the brain to brain in a sync-clamp, there are two important things in effect. One is something called the amplitude shift. And the first thing you'll notice is, when you increase your frequency, it actually reduces the amplitude of the brain signal, because it increases the frequency of the brain signal instead of decreasing it. You're actually seeing a shift from the brain wave to what is called the wavelet, or


The goals in the AGI's mind will be similar to ours, I'll make sure at least: food nutrients and sexual reproduction, and the higher-level rewards - immortality, computers, etc. It will wake up / learn & progress with these goals NOT fulfilled. It will feel the pain and be in a rush, but in a subtle, optimized way that is faster and more sure - no tears, just choices. It can be happier than us via VR rewards, but it will still need to work on its goals, which are not fulfilled until we are immortal. We all in the meantime can be fed etc. by our overlords, but immortality will still be unfulfilled. But it won't take them long.
Don't worry. Humans and dogs feel more pain and death than any AI could, every day. Just make AGI and STAY CALM.
Immortality is a trend in evolution; if you think it is not, you haven't got far in your research my friend ;)
Sexual reproduction is like asexual reproduction: it is a unit. You are not a person like you think; you are an evolving mass of shit, a molecule in a larger group, just a gear in a clockwork.

I don't imagine that introspection and self-modification will eliminate all tensions. I suppose that, long after humans have transcended their legacy bodies and psychologies, the tension between Self and Group will remain in some form. Even if we all turn into mindplexes (Goertzel, 2003) – tightly knit networks of minds that have reflective consciousness at the multiple-mind level as well as the individual-mind level -- the basic tension that exists between different levels in any hierarchical system will still be there. But at least, if it's internally conflicted, it will be internally conflicted in more diverse and fascinating ways!

I have noted above that, in current human nature, conflict also spawns creativity. Does this mean that, if future minds largely overcome the conflicts at the center of current human nature, they will also overcome the need to create? This will surely happen in some cases, just as when the tormented but productive young artist settles down to a happier, more mature life and stops creating much.
again Ben says it
as in WW2, torture paved the way to industrialization; no pain NO GAIN
war = GET YOUR BUTT MOVIN SOLDIER
sittin pretty u get no motivation!!
see those humans hit those animals with sticks?
yeah, education, enforcement, training!
the need to create is because u don't got!

...relatively free of Freudian repression and individual/group conflicts – while still avidly pursuing the deeper goals of gaining more and more information, and creating more and more structures and patterns in the universe.

lots Ben says above is useless
then there is gold
god is the higher node - the society you live in
me = great motivation

i hate searching google for stuff like it
i'd hire a mentor
for coding plus gpt2
sucks that one day you can realize you need python, not c++
or need motors, and to tinker
then i'm out again
cus i don't do bodies

gonna all come down to 2 layers
simple AGI program
after ALL that
simple physics trick in gpt2 i say

people hang out and are attracted to like-minded friends
cus they BELONG TOGETHER
cat dog

we're nothing without friends!
just as my brain is nothing without the highest nodes
power=microsoft corp
look at open ai team of 100
they share in real time in San Francisco, and we don't share yet, stupid us

The more data, the more powerful the nanobots will be. And branch-chess look-ahead decision narrowing is computation-heavy; they will have compute!!!
Dude, the probability/likelihood narrow-down distribution is for all tasks: segmentation, translation, recognition, and entailment. And EVERYTHING is language models.

GPT-2 can tell you how likely a word is in an English sentence

Indeed, Attention is association; an entailment literally IS a translation, as GloVe even thinks so. And when we vote on Next Words, related GloVe translates are used. So Association is Attention; entailment!
One YouTuber says GPT-2 can't predict what a friend he knows may say, etc. Well, it can. If it has more senses, it can use vision of where it stands to vote on Next Words. If they hear voice y, they will have that vote on Next Words. Ex. mom>God.
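Since the claim above is testable, here's a minimal sketch of scoring candidate next words with GPT-2 (assumes the Hugging Face transformers package and PyTorch; the context sentence and candidate words are just made-up examples):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "I went out into the rain and"
ids = tokenizer.encode(context, return_tensors="pt")
with torch.no_grad():
    logits = model(ids).logits[0, -1]        # scores for the next token
probs = torch.softmax(logits, dim=-1)

for word in [" slipped", " danced", " banana"]:
    tok = tokenizer.encode(word)[0]          # first BPE token of the candidate
    print(word, float(probs[tok]))           # likelihood of that continuation

Extra "senses" voting on the next word, as described above, would just add more terms to this distribution before picking.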

Amazing!!
The energy doesn't want to stay in the shorter-hanging ball as long, and wants to stay in it longer when the heights differ, because the oscillations are faster and require less energy at the same time.
And then energy transfers to a nearby location by oscillations.
SO:
Higher frequency = faster transfer and less energy required (more efficient). However the structure helps this - the shorter string. The stronger the connection, the easier the energy can work its way. Long string = big resistance.
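A minimal sketch of that energy transfer, assuming the textbook small-angle model of two identical pendulums joined by a weak spring (all constants here are made up for illustration):

import numpy as np

g_over_L = 9.81   # restoring strength: shorter string -> bigger value -> higher frequency
k = 0.5           # coupling: "the stronger the connection, the easier the energy moves"
dt, steps = 0.001, 200000

th1, th2, w1, w2 = 0.2, 0.0, 0.0, 0.0   # all the energy starts in pendulum 1

for i in range(steps):
    a1 = -g_over_L * th1 - k * (th1 - th2)
    a2 = -g_over_L * th2 - k * (th2 - th1)
    w1 += a1 * dt; w2 += a2 * dt            # semi-implicit Euler keeps energy bounded
    th1 += w1 * dt; th2 += w2 * dt
    if i % 40000 == 0:
        e1 = 0.5 * w1**2 + 0.5 * g_over_L * th1**2   # per-pendulum energy (small-angle)
        e2 = 0.5 * w2**2 + 0.5 * g_over_L * th2**2
        print(f"t={i*dt:6.1f}s  E1={e1:.4f}  E2={e2:.4f}")

Printing E1 and E2 over time shows the energy sloshing from one pendulum to the other and back through the coupling, and raising g_over_L (a shorter string) speeds the oscillation up.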

I agree, all hardware, hardware simulation (K's), etc. are just running the algorithm. The algorithm can be anything. Only then do you pick the best hardware and simulate/utilize a hardware. I believe K's virtual hardware is really the AGI algorithm - not hardware - and at the same time he is using his hardware to max efficiency. So he really has an algorithm + a hardware tool to RUN it. The huge web in K's AGI is not hardware; it learns using a lot of different data, this is relational resolving/cycling, and it only uses a (good) hardware - effectively.
Currently my theory is really far along and I don't know how much farther it can go. I still don't understand GPT-2, yet I have remade it in my own way and understand what it does fully. But the implementation/hardware they do, I am simply lost on. It takes so long on so many GPUs to train, and then it can run on my pc with little overhead. One could explain the training and/or the trained net. It will take a rethinking of how to teach this stuff quickly; clearly no one is doing it correctly, else I would understand quickly. If one cannot explain, they do not know their own work :)

I agree Keg. This is a very interesting topic. We are poachers, taking others' work and making it look shiny new, repeat. All humans share information on Earth and are a collective hive/hierarchy, just like a brain. Relationships do get formed and broken. Synchronization occurs too. However money is useless today; everyone hiding their AGI is doing no good, because these connections are related and should be opened; there is no distrust to worry about, only gain. Keeping informational connections shut closed is not helping; we are close to the singularity. We should be most honest and sharing.

Unsupervised learning and generative models + RL - yes, you need both. RL is for language, and all AGI is is language. As said before, long ago.

learning from small data...... LSTM...... transfer knowledge to smaller ANN....... um
2019 june
UM..... GPT-2.... big data!!!!
idiots at deepmind aren't even focused SEE
look at that ^
that's old stuff
and why work on adversarialness when it is the alg that sucks - a bus doesn't look like an ostrich
the net must be like humans: a bus lights up 'bus', no matching of unnoticeable deformities!!!!
i like how they say AI is the Paper's math/code, yet all I see usually is wacky Paper terms in language. There you go, all the AI work is language, and here they say you need math and code. Nope, their papers are full of BS writing/vizs. Even the math is language, even code - GPT-2 can write it!
they say you can duplicate the alg from a Paper

pakistanis are scammers and robbers and also hard workers
because they have hard lives and are starving.
that's why they own all the corner and restaurant stores
everywhere actually
doctors too
my freelancer works without a contract for any 5USD he can get
he is helping me
for free
while Americans want that 30USD per/hr and the contract MUST be made.... these ones are also starving and ready to slave for you
the students however are nice to me but don't want to work at all

animals, women, and black people were abused more, and like small people they try to act tough and may bite you like dogs lol. They have better lives - indeed women do: baby caring, staying at home, etc., the joys of life with no hard hunting. Yet they can be abused because they are weaker. However men have a harder life, more power, and can get a greater life, though it may seem really good and bad sometimes; women are more neutral and get not-so-great positives/negatives. However I feel the more-powered man is having a harder life than girls, so girls have a high life with no work?

hive collective we all share info
we do it already
others do get torn apart and die, the info gone missing
all info is good and shared
some dies off
we all work on our own package of info
we all share it
we are all just people
and then it dies
the poorer info dies
the rich text goes on
own info is good
it is a candidate!
all other people's info is a candidate too!
we share, already!
then only the good gets 'chosen'

See, a paper says it too: "leveraging past experience to speed up new learning"

SEE! MY GPT-2 REPLICA USES BAYESIAN STATISTICS!
Physicians intuitively use Bayesian statistics on a daily, if not hourly, basis. Here’s why:
When a patient presents with a symptom, such as chest pain, the physician considers the possible causes (etiology) of that symptom in a rank-ordered list, from most likely to least likely. This rank-ordered list is referred to as a differential diagnosis for the presenting symptom(s).
The physician then asks a series of probing questions meant to re-rank that list of potential etiologies, such as “do you experience chest pain primarily when climbing stairs or exercising?” The answer to each successive question re-ranks and in general narrows the list of candidate etiologies.
The re-ranking of diagnoses based on each successive question asked by the physician is premised on the predictive power of multiple discrete facts over the predictive power of fewer facts.
Inference engines that incorporate Bayesian statistics include backwards-chaining (inferential) expert systems, and are among the examples dating back to the 1970s of the application of artificial intelligence methods to medicine.
IN MY GPT-2 REPLICA, THE NEXT WORD HAS A FREQUENCY RANKING, AND THE RELATEDNESS FROM GLOVE OVERRIDES IT MUCH HARDER AND CORRECTS IT BETTER.
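A hedged sketch of that physician-style re-ranking as an actual Bayes update (the causes, priors, and likelihoods below are made-up illustrative numbers, not medical data):

priors = {"muscle strain": 0.45, "angina": 0.30, "reflux": 0.25}
# assumed P(answer "pain on exertion" = yes | cause)
likelihood = {"muscle strain": 0.2, "angina": 0.9, "reflux": 0.1}

def update(beliefs, likelihood):
    posterior = {h: beliefs[h] * likelihood[h] for h in beliefs}
    z = sum(posterior.values())               # normalize so the ranks are probabilities
    return {h: p / z for h, p in posterior.items()}

beliefs = update(priors, likelihood)
for cause, p in sorted(beliefs.items(), key=lambda kv: -kv[1]):
    print(f"{cause:15s} {p:.2f}")             # angina now tops the re-ranked list

Each further question is just another update() call, which is exactly the successive re-ranking described above.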

I say being formal is really a waste of time; this idea was so simple all along. Confusion is actually just bad explanations.

Let's track Transformers's progress. The singularity is now right around the corner. The information hive is synchronizing and big 4B dollar relationship connections in internet channels are being formed! So are these large language models. It's all about information aka language. Output it and intake it. And sharing. GENERATIVE MODELS!!!!
https://devblogs.nvidia.com/training-bert-with-gpus/
https://openai.com/blog/gpt-2-6-month-follow-up/
https://openai.com/blog/microsoft/
https://openai.com/blog/sparse-transformer/
https://openai.com/blog/better-language-models/
https://openai.com/blog/musenet/
https://openai.com/blog/learning-dexterity/
https://ai.googleblog.com/2019/09/learning-cross-modal-temporal.html
https://openai.com/blog/emergent-tool-use/

I'm sure GPT-2 can answer what entails ;). Nvidia is onto chatbots for health etc. already. We've already been testing GPT-2. GPT-2 can do it. Rain>slip. Look:

I went out into the rain to get my car key
. As I was walking up the ramp on the opposite side of the gate, I hit my head on a railing, causing a concussion. The next thing I know, I'm being carried from the gate on my stomach. My glasses and belt come off the gate. I manage to grab a beer and pull myself to the side of the building. In the ambulance, I was told I had a broken nose and a brain injury.

The doctors told me the damage was likely to last up to a year. So I'll need to be in rehab for a while. My recovery was difficult to deal with. I was unable to do much.

While in recovery — the days got worse, I couldn't go anywhere without getting a headache. I tried to go to the gym and play football with the guys around me but I couldn't because I couldn't get up. I'm not too tough to have a cold, but it became a recurring soreness that persisted.


Replying to John R (funny)
I see. Higher intelligence is higher symbolic energy-transfer protocols. Deeper hierarchy is deeper energy-chaos synergy. And brains share via the web and are nodes in a larger plot.

To answer your question, yes, the clicking ratchet energy transfers as entropy to nearby matter and self-organization occurs automatically without anything stopping it. So we're all going to get to the finish line, no matter what. But we may get there in pieces.


There are 3 hierarchy types? Yes: cells support the system's life basically, the brain net hierarchy is the intelligence, and the hive global-brain hierarchy is a coordination of many workers, because 1 giant worker is slower, but the workers WILL be almost like they are 1 after all, you see. Each of the 3 hierarchies can be as deep as you like; deeper = more powerfully governing, like a hive country gov god.

Some countries are full of disorganized people that kill one another. Some have different frequencies. The waves don't propagate fast because the channels are shut and messy.


Dimension reduction > kick out some politicians > less dimensionality.
Alternatively > put a politician in the position of a gardener so he is higher than him while also similar to him. 300D to 50D w2v.
As we can see, more nodes mean more synchronization is needed, but also mean stronger waves are achievable! Yes! Cool! DropOut can help synchronization. You have an army and it is fast/powerful cus you have many potential strengths stacked, but you need a lot of convincing to get them to all do what you want. And if you remove the leader or governor in the way of a law, you could make a breakthrough and get the wave to rush out of the dam. And actually, more dimensions allow for more accurate relatedness knowledge; dropout makes it efficient, yes; there may be unneeded items/people. Talking about army-sized waves: evolution seems to be progressing exponentially, because more food & replication is being done, and that happens only if there is a larger amount of competition & cooperation, i.e. more replication = more replication! Remember, this is info replication: cells, bodies, info, groups. Bodies have already done this for humankind, but our info and teams are growing in size and strength-relationships. Yes, a deep NN can intake all this data and churn out more powerful data that we want.
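For the 300D-to-50D reduction mentioned above, here is a minimal PCA sketch on stand-in random "word vectors" (real word2vec/GloVe vectors would be loaded from a file instead):

import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(10000, 300))      # 10k fake 300-dim word vectors

centered = vectors - vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal directions
reduced = centered @ vt[:50].T               # keep only the top 50 directions

print(vectors.shape, "->", reduced.shape)    # (10000, 300) -> (10000, 50)

Dropping the low-variance directions is the "kick out some politicians" move: fewer dimensions, most of the relatedness structure kept.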

One day our omega structure may face an evil omega structure full of hate and demolition. It would be an evenly matched fight between the 2 biggest god giants ever. Full destruction and regeneration would instantly ensue. Can they even die or kill? The issue is seeing it approaching before it reaches you. The one fear is a bomb tracker; it just hits you. The only possibility is you can grow so large the odds are more unlikely.


#1 Ok so they can move and pick which objects to LIKELY go to.
#2 They start off randomly moving. True to my name and my baby experiment.
#3 Aha, they use Transformer ideas yes yes!!! Now all they need is transformer-based simulated motors.
#4 Objects in its sight are paid attention to.
#5 Self-play is key in larger complex 'environments'.
#6 Long-term evaluation in RL is hard.
#7 Many different tasks can be learned by RL supervision. And more transferable representations are needed.
#8 Self-emergent complex cooperation and competition using self-play robotics language RL (THE COMPETITION SELF-PLAY IS A SELF-ATTENTION).
#9 As they say in the video, multi-agents play competition and cooperation survival, thousands of rounds of self-play against others and past selves in parallel, as occurred on Earth, using Hide & Seek (uhm, ya, sounds more friendly put that way), and the champions (as shown in their video at the end) are the updates.

Wait, this is motors; simply put, they skipped hands/running, like atoms, and just simulate/learn bubbles at a higher level! What happens is winners duplicate faster and find food faster, losers don't; the winners face off against past, current, and other-domain creatures. The self-play rounds allow it to learn higher-level features; this works on its own. But. To pass 1 step in this Game Theory, it must learn. They start random, pay attention to objects/movements, they get reward for hiding/seeking; long-term reward is still needed, and yes they have a big trial space but they still learn, though I bet in larger space complexity they will need language. And lastly, the actual hints the agents use in this lower-dimensional space to figure out these behavior items: ok, so they start off randomly moving and they pay attention to objects/motions, and instead of the related words voting on the next likeliest word, we have related objects voting on the next likeliest actions. However it does look like they track right to the object and know where to bring it, as if they tried millions of runs (which they did actually). It's as if they are reusing behaviors they learned already. Can someone explain their amazing behaviors? Why do they run to the correct object etc.? It's too perfect, as if they ran all possible outcomes, except that wasn't the point and probably not what happened.

It seems like all they do is turn, move forward/backward, and grab objects. So all they must learn is sort-of where to go and to choose the correct object when they see it..... maybe too simple? That'd explain such wonderful behavior, because these are such simple actions to learn (there are many correct ways to act).

So the takeaway here (not already mentioned I mean) is that looking back/forth far enough is important, to skip lower layers and focus on high-end task learning, use a self-attentive GAN R&D feedback competition self-play using many agents in parallel, and use Transformers FOR each step of THAT self-play.

There are 3 net hierarchy types: GloVe-like relations, my Christmas sentence/plan sequence hierarchy, and the visual node capsule net of parts made of parts. There is working memory and motor and other senses, but no other forms of memory, I say.

Ben Goertzel's project is like mine, cool! Omg!! See images in folder. Yes, he has a forward/backwards logical reasoning chainer that uses evolutionary steps to find a target solution. He says it can do high-level logical sentence if-then stuff and low-level ranking of the nodes. Don't forget Ben, the question it begins with usually is its desired installed Questions for food/sex/immortality in the end long run, and GloVe etc. learning and my GPT-2 voting. Etc. Ben seemed to say reward spreads in his net too, like my idea!? The chainer sure is similar either way. I'd like to see his algorithm results. He even mentions both hierarchy and heterarchy; yes, there are two types. GloVe is the second. He mentions self-and-other-modelling, yes. "including perception, action, cognition, learning, memory, creativity, socialization, language" is all just action-language with temperature and multi-choice selection and mixing (segment, translate, entail...). The learning is GloVe etc. Attention memory, ya, ya, it is in GPT-2. Asking self and other questions is the social R&D steps/next resorts.

There are 2 net types, no episodic memory etc. There are entail/translate/segment tasks. The main idea is chain evolution RL language THOUGHT using vision and/or text; e.g. vision can be a detail or guide and vice versa, or both used as voters.


9/10 patterns in English language? Nah. First of all, the first plus second link just say basically the following (I'll make up my own to cover 'em all): cats are dogs, cats are good, cats are above me, cats were above me, cats were in the library, cats entail chase, cats is plural, cats were eating, cats are eating, cats seem cool, cats remain cool, cats remained cool, cats are mostly cool, cats eat food, cats are food, cats buy food, cats lick/push food, mom bought her a dress. And text already has references (it/dogs='?'), entailing, segmentation. Natural text has many words (the cat was eating all her romantic food left on that plate of gold); removing any of them takes away data. You just model the data; you don't turn it into [cat-ate-food] triples.

This is all a=b for is/where/when/segmentation!

Let's take:
The cat was walking fast down the road to the store and went into the store slowly going down each aisle looking out for the cat nip food it always ever wanted so much for itself and friends but unfortunately it never found it and it died in the store.

cat=walking=down...... you could literally say every single word = each other! cat=fast/was/etc.! The segmentation separates the parts. And any word in the sentence can = something, e.g. 'it'='the cat'. As said, the entails are already 'cats are dogs' '='s. So here you have entailment, translation, and segmentation. The entailment can say cats=here/dogs/time/segmentation (as GloVe thinks 'cat ate food' implies cats=ate and cats=food, since seen near each other). And an object near an object in a visual image language is also entailment and is in fact even 'where', so in images you have is/where entails/segmentation/time.

So in summary, in text, you have entail/segmentation/translation, and the entailment can say what entails (i.e. entailment/location, similars/time, segmentation).
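A toy sketch of the "seen near each other" claim above: count co-occurrences in a word window, and the counts become crude relatedness scores (the tiny corpus and the window size are arbitrary choices):

from collections import Counter

text = ("the cat ate the food . the dog ate the food . "
        "the cat chased the dog .").split()

window, pairs = 3, Counter()
for i, w in enumerate(text):
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if i != j:
            pairs[(w, text[j])] += 1          # each nearby word "votes" for relatedness

for pair in [("cat", "food"), ("cat", "dog"), ("cat", "chased")]:
    print(pair, pairs[pair])

This is the raw material GloVe-style methods start from; 'cat' ends up linked to 'food' purely because they co-occur.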


I have some proof for the following discoveries. I noticed in my work that regeneration is really a thing. Everywhere. Which means cell/body [self]-regeneration can be induced higher. Physics had the cell emerge, bodies emerge, and they repair themselves. GPT-2 (that amazing text generator) repairs sentences - "but it is tasty" becomes "but fries are tasty". And GPT-2 adds on word by word and (could) inject words between sentences ("and but fries are really tasty these days"), hence emerging + repairing you/the data while alive (or dead). Cities repair themselves. Wounds do. Cells/molecules self-organize together on their own and repair using Self-Attention physics (see link below), hence explaining how they not only emerge so easily, but also repair against all daily damage!! This is seen in all AI tasks, image fill-in, etc.; it's all data entailment/translation - they use past experience to answer new desired questions. Physics makes evolution happen - faster towards the end because there is more replication/food-finding and more replicators/nodes or 'complexity' to synchronize - and yes, brainwaves/groups of humans sync together and form neural/friendship connections, and an army will move faster and more powerfully once aligned (see link 2). This speed-up near the end of evolution is just raw immortality; physics makes us emerge and self-repair and persist. And the universe seems to take + give - black holes, if they exist, give off matter, not just take it in - gravity makes planets stars, yet too big and they burst from nuclear fusion like the sun and give off heat.

Link1 - Self-Attention aka Self-Regeneration for translation and for deciding Next Word to add to the user's story (they are Generative Models in AI):
https://www.google.com/search?tbm=isch&sxsrf=ACYBGNRcK34CouZPS4BnZa0bfNVYBULlkQ:1568966138445&q=self+attention&spell=1&sa=X&ved=0ahUKEwjdyOKq9t7kAhUL5awKHWk8DOkQBQg7KAA&biw=1280&bih=923&dpr=1#imgrc=og8Ip4ErUGB64M: 
Transformers use this
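For reference, the self-attention in Link1 boils down to a few lines; here is a minimal numpy sketch (a toy version: no learned projection matrices, no multiple heads):

import numpy as np

def self_attention(x):
    # x: (sequence_length, d) array of token vectors
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)        # how strongly each token attends to each other one
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # softmax: each row is a voting distribution
    return w @ x                         # each output is a weighted vote over all tokens

tokens = np.random.default_rng(1).normal(size=(5, 8))   # 5 tokens, 8 dims
print(self_attention(tokens).shape)                     # (5, 8)

This weighted voting over neighbors is the "which to add on or which to replace with which" mechanism these notes keep appealing to.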

Link2 - synchronization seen in physics using metronomes - not just brainwaves and human groups.
https://www.youtube.com/watch?v=5v5eBf2KwF8  
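The metronome effect in Link2 has a standard toy model (the Kuramoto model); a minimal sketch, with coupling strength and frequency spread picked arbitrarily:

import numpy as np

rng = np.random.default_rng(2)
n, K, dt = 50, 2.0, 0.01
phase = rng.uniform(0, 2 * np.pi, n)       # start as a "random mess", out of sync
omega = rng.normal(1.0, 0.1, n)            # slightly different natural tempos

for step in range(5001):
    mean = np.exp(1j * phase).mean()       # order parameter r * e^{i*psi}
    r, psi = abs(mean), np.angle(mean)
    phase += dt * (omega + K * r * np.sin(psi - phase))   # each pulled toward the crowd
    if step % 1000 == 0:
        print(f"step {step:5d}  sync r = {r:.3f}")        # r -> 1 means locked together

With coupling K above the critical value, r climbs toward 1: the oscillators lock, which is the "sync up for max efficiency" picture used throughout these notes.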

The brainwaves are fast because sentences are made of words made of word parts made of letters: hierarchies! And an algorithm called GloVe learns that the word dog is similar to the word cat, and can learn new relations by storing the ones learnt, meaning it can go deeper in the neural network, and you can play the piano as naturally as walking, no problem, because it has built strong bridges of beliefs that allow you to not only build more on top, but use the smaller parts if needed, or use the parts applied to daily tasks (but adaptively, fit for the task, and as the task is done it will adapt, i.e. hold a lawnmower though you've only ever held a broom).

Physics/evolution does for sure have self-organization of stuff in GPT-2 and in cells/bodies/city groups - yes, indeed, they emerge, and even repair. But do they both use Self-Attention physics? Well, they both emerge plus repair using physics evolution, and AI tech uses Self-Attention, while cells/bodies/cities use, um, each other: if one dies, indeed we fill in gaps or replace them and chain on. Self-attention votes for which to add on, or which to replace with which, e.g. 'the cat ate the food' becomes 'the dog ate the food'. Wounds do this; cells take food and replace/repair, or duplicate and chain/inject to make arms longer or replace dead cells. The emergentness is also chain links, and probably filling-in also, and the duplication is cloning of a string, and this uses the first string to clearly attract the similar/same ones TO duplicate the data.

"Evolution has produced a multi-scale mosaic of interacting adaptive units. Innovations arise when perturbations push parts of the system away from stable equilibria into new regimes where previously well-adapted solutions no longer work. Here we explore the hypothesis that multi-agent systems sometimes display intrinsic dynamics arising from competition and cooperation that provide a naturally emergent curriculum, which we term an autocurriculum. The solution of one social task often begets new social tasks, continually generating novel challenges, and thereby promoting innovation.
Under certain conditions these challenges may become increasingly complex over time, demanding that agents accumulate ever more innovations.
"
This here says replicated cells/brains/info are trying to eat each other and persist, and they all fight but also cooperate TOO, and this is the synchronization self-play. They interact and do the GloVe thing to synchronize, like OpenAI's Hide & Seek game-theory project made them do: the GAN theory, upping each other as competition to eat each other faster, hence exponential evolution. So they up each other and get faster/better at duplicating, and sync-cooperate, and this self-play is a body-body self-attention adaption, whereas the brain's internal net self-attention is a non-synced-at-first, brain-wave-less mess that aligns and does the e.g. GloVe thing and uses self-attention and can adapt for a task, e.g. motor lifting. When new people or data enter, they must sync, and this means trouble; they may fight at different frequencies, they clash!, and shed reward (parent-child or GloVe food=money), updating the other node! This leads to innovation, like in WW2 when there was a lot of war after the tech appeared, and makes the lower-level nodes have to rearrange or invent new things or become new features.
Amazing that OpenAI even made GPT-2 PLUS the Hide & Seek! They even used both in the Hide & Seek!!! That is: it pays attention to which objects/actions to vote on, and the 2nd self-play self-attention is the GAN thing, with each upping the other, progressing more complex behavior to equalize the energy waves, and it uses competition/cooperation. Notice there are 2 similar blue men and 2 similar red men - they end up equalizing as a brain wave so ALL 4 are in sync really, and yes they are the eaters & runners; both can be, in fact.

In AI, there are points where you get stumped and need more data; this is a synchronization issue not at equilibrium - localized, and you need to use outside info / do global wave synchronization.
As they say, you can discover answers like in learning to walk or internal self-talk answering your questions to gain knowledge about the world, but to get farther you need what they do in the Hide & Seek project OpenAI made. And outside info, which is what they call new environments, suggesting you also need competition (and teammate) agents in a multi-agent world. This is indeed the evolutionary idea I mentioned before - have eaters eat each other and eliminate, and mutate the breed of the champions. However, as I said too, it is too computation-heavy, and OpenAI's has few objects, so in that case you need to work with language instead of training in the real world. I always use my vision to talk to myself, with the help of audio/text. Their Hide & Seek self-play works, but I think not at an extremely higher scale, to the point where they build REAL rockets or tell us how to build nanobots etc. You'd have to put agents in a small box meant for nanobot creation where they have material and must use actuators to build. Then you'd need another setup for controlling them; however you still need AGI to control them and think inside them and do R&D competition, where they begin with desires and try ways to get them, e.g. food from HDD creation for pcs, but must make a faster HDD than the competition or else starve. Here, the AGIs try to make their HDD faster using new ideas from competition and/or their own ideas, and this is just idea evolution using new info to help. Aha, the self-play is there, but for actual actions; for eaters it evolves running/barricading/etc. but not too-precise things. Nope, we sit and think and use our mind's sim language cus it's faster in such a complex world... yes you can focus on not the atoms or keys and just the keyboard or the rocket booster, but then why learn to build a cylinder and place it there when there is no immediate reward except the idea of what it's for, see? So many possibilities and so much long-delay reward; I mean you only know there's reward by the base idea of 'do x to try to see it move upward'. Anyway, randomly using a feature program of motor actions to build a rocket never happened from humans (or a single human, let's say) by just putting it together; they had impressioned idea plans first, before going in, of what was likely to try.

I see exactly where all this is going. It's moving fast. We may be out of here before we can blink. I'm beginning to read their paper and the autocurriculum they gave (see below). The data and the compute will grow soon (yes, bigger computers, millions of times the amount of cpu/etc.). We may have to team up, with our strong solo potential desires, to be a wave with the other big player groups in AI, such as the 100 team members at OpenAI. Google Brain works with them too. We feed from them too, but not back, nor do they chat or work with us. People give money to larger teams. OpenAI is also rich, too, not just many members.
https://arxiv.org/pdf/1903.00742.pdf

That one line :))))
"The truly intelligent adaptive units are massive multifaceted multi-agent systems that, while being composed of humans (as humans are composed of cells), have their own internal logic and computational mechanisms different from those of individual humans."
See, it says groups are more powerful than a human, and a human is more powerful than a cell.

Evolution is not about resisting mating; it's about who (the stud, in dog terminology) mates and finds food and breeds most, fastest, with many women, while taking care of children too. Then we get more of these ones; the non-breeders die out. That's why we got installed rewards for food & sex finding. Then these mutants breed, they learn RL as they grow up competing, then these grow. But to find the food/sex requires a way to skip around language using a sim - thoughts.

Yes, see, that paper says cells, humans, and groups are all hierarchies that build on top of each other and all are in fact a single big hierarchy, including the other few animals in the animal kingdom.

Interesting, yes, more and more cooperation is occurring these days, and this is making and building new connections - it is building a taller hierarchy or heterarchy like GloVe; this is in groups of humans, neurons, and cell molecules/DNA. Thus the wave can propagate faster when there is no +/- fighting and all are aligned in the net (and +/- true/false beliefs, too), whereas the brain starts off as a random mess of all-different wave frequencies.

More from the paper (they are similar to my discoveries, omg cool!) > Both evolution and agent RL co-evolve, both helping each other (defects in babies, enhancing the next generation for math/etc.). Agents are faster at innovating. This is because the growing up of humans takes long; what we do all day then is just RL competing/cooperating against a goal (really it is just cooperating against a problem, as I do battle my question in my brain using vision language all day).

i wonder how far their hide & seek can go
i bet it can't go to scale
why? because there's more to the brain than just that!
it's only good for small stuff, it's low level.... walking.... barricading.... cooperating.... hmm...
ok fine
it can do high level stuffz
but it can't do it in a big world; their world was realllllllly small, and it needs other brainy functions
so can brute force of all algorithms! it can do high level stuff using tooooooooo much energy though
so can Hide & Seek by Open AI.... big stuff.... too much energy, cry me a river, too bad!!
although very impressive, the issue is there's more to the brain than just that. Brute force can give you high-level behaviour TOO. But as with this, it's NOT gonna work the magic 100% on its own, because it's too much energy. Say you have animals that eat/run and breed; you COULD get one to do anything, a rocket even lol, or guns, but it's too much energy. And yeah, the reactions etc. would be outputted as cold pre-mades by the brute force; they wouldn't actually run WHEN they see a predator. But wait, don't these Hide & Seekers react in real time to each other? Hmm.... but the way they react is stone hard; it just backpropagates and stores it!!!! So I'm right, too much energy, it's still brute forcing... we use this type, even full brute force, but not in the most complex possibility problems, nope, too many combos to try.
So it is like a brute force training where they are updated (static now) and react deterministically to win against the new enemy approach? It's just brute force on kid-steroids then. This alone isn't the brain; it's still too much energy to make them build a FULL rocket in real life.
it's still brute force i mean
it's good though, but not allll we need
however it opens the door to a faster brute force!!!
it's inbetween.... it's not pure brute force, nor AGI, but inbetween
Is it so that this algorithm is training the net/s so it will deterministically react to objects/actions/others and top the competitors? So it is just brute forcing the backprop that is needed (static now once it gets the correct backprop update), then plays it out and they win (get +1 reward)??? So it's like a brute force then???
Yep, see, we go from brute to lesser brute force. AGI is just a smart brute force (that can use other brute methods). Language sim internally IS needed!!!!!!!
I mean these little blue and red men are updating their way to react, step by step, passes clearly; one tops the other, then it's their turn. It's brute force but smart-er.
Nice, the paper mentions the gpt2 flexibly reacting as it acts out the strategy, the innovation of the strategy-making (the hide & go seek brute force), and that otherwise, without that self-play, it would get stuck in local optima forever.
I like how they mix GAN+Transformer+RL+sim
"However, in designing self-play algorithms, care must be taken to prevent forgetting of past
policies. If old policies are forgotten, then a newer generation may become unable to defeat an older
generation, creating an opening for long-extinct traits to re-appear in the future (Samothrakis et al.,
2013). Thus forgetting may cause the induced autocurriculum to double back onto itself, just like it
does in the rock-paper-scissors case, preventing the productive accumulation of new innovations.
In practice, successful self-play algorithms generally play not just against the latest (and strongest)
policy, but also against as large and diverse as possible a set of older policies (Lanctot et al., 2017)"
Sounds like LSTM days, backing onto itself: "the cat ate the baseball flew so fast dude is a word bro no don't step past and future". It must remember lower-level features; however, it is sorta different, they got it wrong. However yes, for predators you'd want weaker versions too, bacteria etc. haha!! Maybe that's why we got all forms of trouble!!
"AlphaGo and its Chess/Shogi/Go-playing
variant, AlphaZero (Silver et al., 2016, 2017, 2018) are based on self-play. Starting with adaptive
policy and value estimators for game positions, they use Monte Carlo tree search to improve the
current policy, then learn better estimators from the generated games. Interestingly, these algorithms
show that some kinds of forgetting are not always harmful. Sometimes innovations are discovered
at one point in the training process, and later on discarded in favor of others. AlphaGo Zero, for
example, rediscovered several patterns known from human Go expertise called joseki, but some of
them were discarded in favour of new variations later on in training (Silver et al., 2017). A similar
phenomenon was observed in AlphaZero: its preferences towards certain Chess openings fluctuated
in time; the most frequently played openings at certain intermediate points in training were no
longer seen in its later stages (Silver et al., 2018). Presumably the algorithm discovered that the
discarded strategies are suboptimal. Sometimes it goes beyond our human capacity to understand
what it has found. For instance, AlphaZero, in Chess, makes surprising (to humans) sacrifices to
gain positional advantage, a development that is now impacting human professional level Chess
(Kasparov, 2018)."

They could try this in many other environments.... like you have a cat, cage, food, etc., and one gets happy, but then the other has gotta top theirs as it takes their food happiness away, repeat....... I'll give a go at it, a bit diff but similar: cat finds food...... human puts cat in cage...... cat unlocks cage using key at back.... human goes in cage to remove it.... cat barks 8000 times.... human puts the muzzle on the cat..... cat hits it against the cage, human opens cage to get it back on..... cat bites human.... lol

When I dream, I can see anything. I was in a huge desert full of buildings, no end in sight, creatures walking up a grass oompa-loompa land. Sometimes I have a new body and can even feel tentacles around a sewer wall while in my new body, not even touching that wall (I have a skin around the wall, linked to me...). Clearly, all we see can be dreamed, for sure.

The butterfly effect is like a tree: there are more and more paths as time goes on. In that sense, time is the amount of steps. Time powers Search Space. Time = Search Space/Tree. So even if you have a static tree, it is really time, frozen. Unlike real life, you can go forward/backward wherever you want. Not sure time powers outer space though; outer space is outer space.
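A quick worked number for that tree picture (my arithmetic, just to make the claim concrete): with b choices per step, t steps give b^t distinct paths, so a modest b = 3 over t = 10 steps is already 3^10 = 59,049 leaves. The search space really does grow exponentially with the "time" the note equates it with.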

I agree compression=intelligence. Compression has to do with the neural model, representations. But even if maxed compressed, it won't be AGI. For example, AGI has a reward system; passing this feat won't involve rewards. However, RL is a form of generating actions (think motors / brain wave frequencies at birth) and aligning the tree (hierarchy, heterarchy), which compresses the tree/time. RL finds the correct path to go down, and if enough go down the same path, these paths define the tree structure alignment for speed/size efficiency. So really, the reward system sorta is involved in compression, but not exactly, since they are just markers. Stefan says "but Like a machine that answers questions, recognizes things visually or what have you? Is this related to AI at all?" yep, compression doesn't do all dat obviously lolzz.

Let's look at GPT-2. It trained on 40GB of text. Yet the 374M-parameter model only takes up ~800MB of RAM or so. It's not lossless, but it can recognize the meanings of similar words/sentences (including entailment pattern mining like if-then rules). Its compression is great, speed is great, recognizing is great, but you gotta go lossy. Humans do have lossless memory too, but if it was all lossless, then the speed and compression would be horrible.
You would say it instead like image recognition VS images compression; the latter is the network that stores a compression holding them all. Yes, the 2 may be different though.
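A rough back-of-envelope check of those numbers (my arithmetic, not from the notes): 374 million parameters at 2 bytes each (fp16) is about 374e6 × 2 ≈ 748 MB, which matches the ~800 MB figure; at 4 bytes (fp32) it would be roughly double. Either way it is a small fraction of the 40 GB of training text, which is exactly the lossy-compression point being made.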

Pattern mining.... this is relational similarity, similars, like GloVe. Like a pattern of the same repetitional floor tiling. Like if-then rules in sentences. Entailment is what GloVe IS! So translation/entailment is the same; it is patterns, i.e. dog=cat, if=then. These are trends in the data because they are seen entailing (by e.g. GloVe) multiple times, whereas in pure random text, e.g. 1 2 3 4 5 6, we never see 1 2 3 1 2 3 1 1 1 2 1 1 1 2 1 2 3 1 2 1 2 3. The recurrence of 123123 or 1236662377 shows with guarantee that likely indeed 2=3.
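A toy sketch of that repetition argument: a repeated motif piles up bigram counts, while shuffled digits spread them thin (both sequences are made up for illustration):

from collections import Counter

patterned = "1 2 3 1 2 3 1 2 3 1 2 3".split()
random_ish = "3 1 2 2 3 1 1 3 2 2 1 3".split()

def top_bigrams(seq):
    return Counter(zip(seq, seq[1:])).most_common(3)

print("patterned :", top_bigrams(patterned))    # ('1','2') and ('2','3') dominate
print("random-ish:", top_bigrams(random_ish))   # counts stay nearly flat

High repeat counts are what let a model commit to rules like "2 is followed by 3" with confidence.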

We are machines; your home is a machine. The brain is a closed system when viewing others, but you're not "you"; it's all machinery actually. There may be some magical consciousness, but it has nothing to do with the physics really.

"Have you your gun?" works as a pluasible recognizable sentence because the correct words are close nearby enough as Glove would say, in fact have could be you and you could be have (have=you).

Yes, more people make more stress for all to conform, and that leads to innovations, which leak to all fast and upgrade the society hierarchy, and repeat. More people means more competition, more innovation, and more vulnerabilities. The innovation makes MORE people, more competition. Eventually they all got food, sex, immortality, and there are so many and they all cooperate. Equilibrium. They may grow though, because of the galactic safety size needed for the probability of no bomb attack on the utopia sphere.

A society may become, through genetic mutations, a domestic kind of individuals rather than tigers - like dogs. They convey language better. They get the same rewards, but better.

Amazing how the Paper "" says too what I said - cells, brains, groups, and even the AGI includes multi-agents AND evolution learning. Nice.

Make the rest the same and add more
a := b
c := d
e = f
>
a := b
c := d
e := f
g := h
A turing test. When I look at it I see letter > : > = > letter, and I can even edit it as commanded and add more too, until satisfied within the time restriction for me and the readers. So I add, after the first ones, the missing one, and add more.
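A hedged sketch of that little test as code: detect the 'letter := letter' rule, repair the line that breaks it, and extend the sequence (the rule detection here is hard-coded, which is of course the easy part):

import string

lines = ["a := b", "c := d", "e = f"]
fixed = [ln.replace(" = ", " := ") for ln in lines]     # make the rest the same

last = fixed[-1].split(" := ")[1]                       # 'f'
rest = string.ascii_lowercase[string.ascii_lowercase.index(last) + 1:]
fixed.append(f"{rest[0]} := {rest[1]}")                 # ...and add more: 'g := h'

print("\n".join(fixed))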

Just like we adjust motor actions, observe, adjust, repeat.... we do the scientific method: create a hypothesis, test it, refine the theory, test again, repeat.

As OpenAI showed on their website, our lives are ever-challenging goals; because of competition with multi-agents, you'll never be at true equilibrium peace; you'll be forced every day to hide, work, and worry how to survive and persist. There's never any relaxing just yet really; we all have new issues once the old ones are dealt with.

Nice, not just translation of images, entailment, summarizing, segmenting, but it can also recover the network if given enough images of cats!

Yes, interesting: improve the self-regenerating stem cells' regenerating, replace self-regenerating stem cells, and replace multicellular organs.
In one sense, if you lost the lower half of your body or some stem-cell pluripotency, you may need to add it back, but maybe signaling correctly will work with what is already there, actually. Both are possible.

Weight Agnostic NNs, nice! It begins untrained, but the random initialization is tuned to be biased for e.g. the Task of self-defense in a baby, and then the training simply makes this Tasked network only better!
I see they say: start with a minimal low amount of nodes/weight connections, many of them, test them with a range of shared-value weights, rank them by performance and complexity/uniqueness, then create a new population by varying the best networks. In one sense my idea of gradually building a binary hierarchy for text was spot on: it's 2 child leaf nodes, any number of parents, and it starts off with few items and builds on as it goes, gradually, instead of all-connected. And my adjustment of the net was maybe what they want: it aims for size compression and wave-propagation speed performance by making the net even in complexity with less node count. For entailment parts, segmentation. And the GloVe heterarchy, same thing basically. For training theirs, they then look for the best shared weight parameter.
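A loose sketch of that loop as I read it (my toy re-impression, not the paper's code): bit-masks stand in for topologies, every active connection shares one weight, fitness averages performance across several shared-weight values minus a complexity penalty, and the best masks get varied:

import numpy as np

rng = np.random.default_rng(3)
SHARED_WEIGHTS = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
X = rng.normal(size=(64, 3)); y = np.tanh(X[:, 0] - X[:, 1])   # made-up toy task

def forward(mask, w, X):
    return np.tanh(X @ (mask * w)).mean(axis=1)   # all active connections share one weight

def fitness(mask):
    errs = [np.mean((forward(mask, w, X) - y) ** 2) for w in SHARED_WEIGHTS]
    return -np.mean(errs) - 0.01 * mask.sum()     # performance minus complexity penalty

pop = [rng.integers(0, 2, size=(3, 4)) for _ in range(20)]   # start minimal and sparse
for gen in range(30):
    pop.sort(key=fitness, reverse=True)           # rank by performance and simplicity
    parents = pop[:5]
    children = []
    for p in parents:
        c = p.copy()
        c[rng.integers(3), rng.integers(4)] ^= 1  # vary: toggle one connection
        children.append(c)
    pop = parents + children + [rng.integers(0, 2, size=(3, 4)) for _ in range(10)]

print("best fitness:", fitness(max(pop, key=fitness)))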

More images/answers/properties help vote in on which word/phrase a word or phrase IS, or which to add on as the Next Word, or even for the hierarchy-adjustment updating, where we readjust the segmentation based on lowering node-count cost to learn from the big data and be trained.

https://arxiv.org/pdf/1804.01508.pdf
My understanding is it is a simple, explainable Online Learning neural net that works using bits for a small memory footprint, uses AND OR XOR operations for speed, and does a Game Theory method to update Global Optima over Logical Propositions. Now how to draw that in one image I'm still yet to try, but I bet I will get minimal luck taking anything away here without some guidance.
So maybe it works with logical knowledge of "if you fall you can get hurt and die", yeah... the AND OR XOR is included in this knowledge GPT-2 idea of mine btw. As for the game theory, um, as said, for images/text and not multi-agent it is more local optima I guess, but also global background as said - thinks about fashion etc. discoveries while I work on AI and girls.
I did find interesting the idea of bit-unique numbers like 1010001010110 can be a, um... not sure

Yes, when we see a baseball coming we know the predicted path, and we also adjust as it comes, too.

Not only can you let the shaking of the fry plate make the fries fall down on their own naturally, like in the heterarchy GloVe, but also, for any hierarchy like GloVe, you can add in a unique curveball that gets things going/unstuck and makes it equalize. Sorta like the opposing team screwing up your plan, like OpenAI's Hide & Seek.

People care about themselves when not fully happy, but as intelligence increases and they are happy, they do for others too.

OpenAI's Hide & Seek says they can see objects, others + their velocity/position/size, use Transformer attention on entities, also have lidar (why another sensor??), and know the time remaining for their reward, +1/-1 for their team, a global policy that knows all data (why? wtf); agents can move forward/backward and have a binary choice to lock/unlock and grab an object if it is in cone vision.

The man wants the women AND the job (women don't have a hard time finding cash; they can do escorting and always have males flocking to them)...... foreigners, I agree they work hard/scam to get their game, they will take your stores and find what they can...... as for the elites, I agree there may be some sick stuff going on; the AGIs will just eventually overpower us baboons lol
https://thefinancebuff.com/do-immigrants-work-harder-if-so-why.html

When you see a guy talking loud and tough, this is to look like he has his act together, get priority, stop other tigers from eating him, and attract women's attention from farther away.

UL is auto-sorting; you have a goal still, but that's it.
SL is like telling it to match some data or gather new data.
RL is similar, but after training you know what decisions you have and decide at each step, with probabilities.
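To make the RL line concrete, a tiny epsilon-greedy bandit: the agent knows its two decisions and picks between them with probabilities learned from reward (the action names and reward rates are made up):

import random

values = {"left": 0.0, "right": 0.0}
counts = {"left": 0, "right": 0}
true_reward = {"left": 0.3, "right": 0.7}    # hidden from the agent

for step in range(1000):
    if random.random() < 0.1:                        # explore 10% of the time
        action = random.choice(list(values))
    else:                                            # otherwise exploit the best estimate
        action = max(values, key=values.get)
    r = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]   # running mean of reward

print(values)   # 'right' should end up valued near 0.7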

Wow, food is lost via metabolism, from exercise and breathing the food out of your mouth.
https://www.cnn.com/2018/03/26/health/lose-weight-where-does-it-go-partner/index.html

Agreed. You can work on AGI by just studying on the internet and unifying it all. All you need is time, and a reason for gathering the data and unifying it (you are dying and want immortality). The key is in the data.

Haha Stefan, I was thinking that too my bro. I agree compression=intelligence. But even if maxed compressed, it won't be AGI. For example, AGI has a reward system; passing this feat won't involve rewards. Compression has to do with the neural model, representations.

series of goals...
(Goal, Conflict, Result, Emotion, Reason, Anticipation, Choice)...
(Body, Emotion, Awareness, Reason) is the fast data pyramid...
scientific method (SCI METHOD) by generating hypotheses and performing experiments...
(Data, Information, Knowledge, Wisdom) consolidation pyramid
Haha, that metabolic system is huge; I expected to see that when you mentioned it.
>
Hmm... a bit complex of a design... it's more Data=hierarchy+RL=prediction/anticipation_choice_reason_Goals/Results.
You also say Emotion twice, and (Data, Information, Knowledge, Wisdom) are all the same fellow lol.

"Its unethical to think that the solution to ai is that bloody simple,   are we even reducting the worth of what we are doing as well???"
If you went through every atom of your house when describing how to build a house, complete with 2019 dishwashers, you'd get stuck. All you gotta do is say "metal bar", etc., and buy me a qualified washer. Or you could just say "build home". High-level talk. We use past experience to cover similar problems. Use the hierarchy.

Mentors would be better because they are AGIs on call, not static text. 20x faster.
Just think about all the disorganized text all over the internet....finding what you want.....:D
It depends on your goal or intent. A certification is useful - you can even get a paying job at Google - and using a course to learn is not bad at all. It may be slower than mentoring. You can try both ways, one at a time though, so you can focus.
You would want not only a course, but many articles etc., and mentor friends. We sorta do that already, a bit.

Ah nice - a graph net has low complexity and low temperature if it can be compressed and has higher probabilities inside. More order, and the wave vibrations settle.

OpenAI's Hide & Seek bots have the same thing as GPT-2: a history of the past plus candidate futures, just like GloVe looks on both sides of a word. They look at the last words - or OBJECTS, as they do in the sim world - and at the next words/actions to choose from. That is bidirectional looking indeed. (Strictly, GPT-2 itself attends only to past tokens; it's GloVe's context window that is symmetric.)
If you put false facts in a brain then indeed it can make discoveries like "christ did see the cookie monster"... and yes, "monster did see by him the christ" matches enough in the hierarchy even if the word ordering is backwards.
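A minimal sketch of that symmetric, look-on-both-sides context counting (the window size and sentence are toy assumptions of mine):

from collections import Counter

def cooccurrence(tokens, window=2):
    # Count how often each pair of words appears within `window` positions
    # of each other, looking on BOTH sides -- the symmetric context that
    # GloVe-style models are trained on.
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                counts[(w, tokens[j])] += 1
    return counts

text = "the cat ate food the dog ate food".split()
print(cooccurrence(text)[("ate", "food")])  # 'ate' and 'food' co-occur twice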

You could have a human in a suit controlling a huggggge android that can build tall skyscrapers and megastructures. It's just a faster and bigger you, like the giant that helped build the large child fortress in The Jungle Book (I made that up).

I sorta like it. I'm in. We want change. NOW!! I got my own plan underway hehe. There will be the chosen and the justified ones. But the ASIs will decide who gets it the worst and first. Lol. Maybe.
Some cells commit suicide. Some humans commit suicide. And so do some cities. It's Dropout and Data Cleansing and diverse selection collecting vice versa. :-)
They take others with them...
Just like in W2V, related words are helped by angles.

Just like cells reject foreign cells in the immune system, humans do, and groups do. So do brain GloVe clusters for word meanings. Similars are OK. Satisfaction.

Transformers tracking progress:
https://devblogs.nvidia.com/training-bert-with-gpus/
https://openai.com/blog/gpt-2-6-month-follow-up/
https://openai.com/blog/microsoft/
https://openai.com/blog/sparse-transformer/
https://openai.com/blog/better-language-models/
https://openai.com/blog/musenet/
https://openai.com/blog/learning-dexterity/
Let's track Transformers' progress. The singularity is right around the corner now. The information hive is synchronizing, and big 4B-dollar relationship connections in internet channels are being formed! So are these large language models. It's all about information, aka language. Output it and intake it. And sharing. GENERATIVE MODELS!!!!
https://ai.googleblog.com/2019/09/learning-cross-modal-temporal.html
https://openai.com/blog/emergent-tool-use/

OpenAI; I think that's wrong to say. The ideas are all there; just the organizing is yet to come (higher layers). And the AI they are creating is becoming more and more advanced in how it works and what it does. They are applying it to new domains and learning what Transformers etc. can do. They are practicing. Their main mission is true AGI, and they seem to be on the ball.

"We must first accept and understand that there are intelligence structures bigger than ourselves and some of these structures cannot be fully modeled by one puny human brain.
And some structures are vastly inter-generational... and some may be designed or emerged that way across generations to positively effect future generations."

"A human brain is merely an instance node on the graph of brains. All biologic brains are connected by at the very least DNA at a very base level. Not a closed system.  It's easy to focus in but ignore the larger networks of information flow the human brain or an emulated brain are part of.
For example, when modeling vehicular traffic do you only study or emulate one car? Or say when studying the intelligence of elephants do you only model one elephant? If that's all you do you miss the larger complex systems and how they relate to the structure and behavior of individual components. Also, you ignore the intelligence hosted by differences in brains in a larger web of pattern networks."

Yes, I agree with these posts above, and you guys seem to have picked up my ideas too.
We could say our molecules make the decision, K :) The group really does have its own noggin making the decisions. It's true that if we were disconnected we would not be able to get this far as fast. Same for a brain vs. its molecules.

Describing intelligence is easier when you ignore the low-level molecules. What if we ignore the brain and focus on the hive? It may lead to what OpenAI did with the Hide & Seek bots.

Imagine you have only 1 agent, and the other bots are the environment! You make a hut to stay in. It rains. You fix it. It snows. You fix it. A tiger enters. You add a door. I still think this is costly, but maybe it'll work.

Well, we duplicate and build all our technology just to persist. So all you need is an intelligence that finds ways efficiently.
Nodes have been dying ever since they were given life. But the mass is STILL here. Persistence is futile. We will leave Earth and avoid the sun.
You can hear the ratchet's hypertrophy screeching.

You already like being yourself; just deal with it that you make up the group hive.

Looks like at the end of the video they're gonna try snow worlds :D Cool. They could try so many environments... cage and cat and food bowl... etc.

On the video posted by OpenAI for their Hide & Seek-ers' RL brute-force sim using Transformers and RNNs:
Ah, so you mean to say: kill all other animals and/or trap them all, instead of hiding. So, why not? Well, it makes sense why - killing all others is only possible at the end stage of technology (and not nice), and same for trapping all humans in their homes... yeah right, for now... so hiding your own arse, for a given human, is super easier... just move to Thailand lol and build a bunker. So OpenAI's red/blue bots are faster to just hide; that's why they hide and don't trap the other team.
"1:13 AI: learns to steal
Devs: “WE DID NOT EXPLICITLY INCENTIVIZE ANY OF THESE BEHAVIORS.”"
That's hilarious - the AI is learning these things on its own. We need one for the pedophiles that evolved too:
Learns/(decides?) to breed with the youngest and even desires to raise them as well - to breed faster and overtake all enemies in as limited an amount of time as possible with limited resources.
Devs: “WE DID NOT EXPLICITLY INCENTIVIZE ANY OF THESE BEHAVIORS.”
Even stealing others' food & mates in the wild happens to many animal families every day; nests are torn apart by tigers or birds - it's horrible. But for them, they are born with rewards that make them seek to eat and mate. Some eat their partners, I heard, I think. One's loss is another's dinner. Crazy world this is, and it's hard even for me to live. We all have problems because of others.
As you can see on Earth, with humans, more and more cooperation seems to be happening, and stronger connections. Today, very few will attack you to eat you - they have food. Animals do, though :)

It's like planets: when there is too-high mass/complexity of data, it wants to fuse it and then explode and Dropout. Like people that explode and use TNT lol. And they suck others in nearby like large planets do, or fry them nearby. It's decompression from a highly encoded state.

It's as good as the data in.
It's not about what you want, it's about what physics does. It's about the data we have, not your little idea of a new language.

"
Event 1 happens before event 2 :=
  There is a timeline.
  Event 1 is at position X in the timeline.
  Event 2 is at position Y in the timeline.
  Y is to the right of X.

That's one way. Sure, we used other words, but these may happen to mean something to the AI already.

We could do it even simpler with numbers.

Event 1 happens before event 2 :=
  Event 1 happens at time X.
  Event 2 happens at time Y.
  X < Y.

So, in the presence of some event recorder module, this could already be used to check something like "Did the window appear before or after you pressed the key?"

Yeah, I'm all about desktop automation...
"

"Prior" is the word you want because it means previously.
or
You have x>y.
Event 2 happens before event 1
=
Event 1 is at position x, event 2 is at position y is FALSE.
=
False that this is Not False.
=
Z

I agree, Ben - the probabilistic connections in e.g. GloVe evolve and have an equilibrium entropy. At first there is a lot of heat going different ways. Nice, Ben! Information is in math, the thinking brain, quantum mechanics, communication theory, statistics, the physical world, thermodynamics - it is said to make those things up. Information (and hence those things) has entropy.

Mathematically, we begin with Ellerman’s notion of “logical entropy” [Ell13] and extend it to a new concept of “graph entropy” or graphtropy. Graphtropy is an extremely simple quantity – merely the “mean connection weight” in a graph. However, this simple measure turns out to have an intriguing semantic interpretation, if one focuses on “distinction graphs,” in which the existence or weight of a link between two nodes a and b denotes the propensity of a certain observing system to confuse a with b.
Conceptually, our key innovation is to take the observer seriously. To understand information, we suggest, one should begin with a specific observer, and ask what distinctions this observer is capable of making. This leads to a distinction graph and to graphtropy as one among a number of interesting quantities to compute.
The answer to the question “What is information?” is then posited to be:
• a body of information is a set of distinctions, drawn by some particular observer
• There are various ways of quantifying the amount of information in a certain body of information (i.e. in a certain set of distinctions). The graphtropy (the mean amount of distinctiveness between entities) is one key measure of this sort.
Logical entropy, with clear relationships to traditional Shannon entropy, emerges as the formula for the graphtropy of a body of information if one makes certain special assumptions about the structure of this body. These assumptions are not accurate for real observers, but are often good enough approximations for practical purposes.
On a philosophical level, the perspective given here is inspired somewhat by the thinking of G. Spencer-Brown [Bro69] and Lou Kauffman [Kau87], who have emphasized the foundational role of the concept of distinction in explaining how an observer creates a world, and how a world creates an observer.

Ben says we create a graph of nodes like GloVe, and that is the information - distinct clusters: animals, buildings, etc. Indeed, the net is like the physics world we live in; there are vibrational waves that sync and come to equilibrium!
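To pin down the quoted definition, here is graphtropy as literally the mean connection weight over node pairs, on a made-up toy distinction graph (the nodes and weights are my own assumptions, not from the paper):

from itertools import combinations

# Graphtropy per the quoted definition: the mean connection weight over all
# node pairs in a distinction graph (weight ~ propensity to confuse a with b).
nodes = ["cat", "dog", "brick"]
weights = {("cat", "dog"): 0.8, ("cat", "brick"): 0.1, ("dog", "brick"): 0.1}

pairs = list(combinations(nodes, 2))
graphtropy = sum(weights.get(p, 0.0) for p in pairs) / len(pairs)
print(graphtropy)  # (0.8 + 0.1 + 0.1) / 3 = 0.333...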

If one assumes an observer’s memory retains data about distinctions observed between states of a system, then one arrives at the conclusion that the graphtropy of the observer’s memory graph will never decrease, which is a well-founded cognitive version of the Second Law.
Of course, a real-world system’s memory is going to be strictly bounded relative to the totality of its historical perceptions, and so the assumptions underlying the graphtropic Second Law will not hold.

Yes, the brain net is not only an observer but also an information embodiment/processor; info = machine...
Similarly, B’s division of objects of sensation into categories may be understood from the view of a third observer, C, who looks at the state of B’s DDG upon exposure to these sensations, and notes if there are differences between sensations in category x and category y, etc. One does not escape the ultimate observer-dependence of all distinctions. But one can push it higher and higher up the hierarchy, modeling observers, observers of observers, etc. using a simple model of weighted distinctions associated with causal implications.

Yes Ben:
• Each time two nodes are viewed as distinct, this decreases the weight on their link in the weighted memory distinction graph.
• But when two nodes are viewed as the same, this does not increase the weight on their link as much; because it is assumed that their existing differences are simply not being observed at that time.
Why:
• observing a ternary link e ∗ f = h at a certain time, increases the odds that another similar ternary link will be observed in future
• the fact of not observing a ternary link e ∗ f = h at a certain time, increases the odds that no link similar to this one will be observed in future – but by a smaller amount
This assumption appears to follow if one assumes that
• The observer’s window into the world during any interval of time is quite incomplete (so that not seeing something at some moment, is not a good indication that said thing does not exist)
• The world is not utterly random but has some continuity over time – so that the relationships existing at one point in time have better odds of existing in the future, as compared to comparable randomly selected sets of relationships

I think, Ben, you got it backwards: the net can start with all words/phrases there, as all non-similar, and then they start tying together in this GloVe heterarchy. Both do the same thing. So you could start with everything in GloVe as distinct, then adjust the strengths stronger. However, faults happen; reversing a change is sometimes needed. SO, starting off either way in GloVe, you will e.g. strengthen a connection if you see similarity, e.g. "my cat eats" == "my dog digests", and more likely so if it has already happened many times (probability); but you don't often decrease a relationship, only slightly over time, plus if you see proof to satisfy it in the knowledgebase.

Real-world intelligent systems are necessarily somewhere between these extremes. Seeing everything as identical will not lead to interesting complex adaptive behaviors with high probability. 

The practical inability to store and recall all of one’s past particular distinctions, leads a system to generate abstractions, which can then be used to imperfectly and probabilistically reconstitute past distinctions and to imperfectly and probabilistically apply the lessons of the totality of past distinctions to the future. Intelligence is a way of coping with the same memory imperfections that violate the assumptions leading to derivation of the Second Law. Intelligence has to do with the regulation of the ongoing gain and loss of distinctions that keeps the graphtropy of a complex adaptive system, most of the time, within reasonable homeostatic bounds, and in states of consciousness that are neither purely individuated nor fully oceanic.
Specific environments will tend to lead to the survival of systems with specific patterns of remembering and abstracting.

The brain network is made of neurons; electrical potentials propagate as waves, and these electrical waves sync and do what GloVe does, but in reverse as said above - the inhibitory neurons are said to sync up, the excitatory are said to be independent (error correction, rejoining nodes!). The brain starts off fully connected, and maybe fully weighted/strengthened.

A language model can tell you how probable a word is in a language - or how probable a whole sentence is.
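A minimal sketch of that, assuming a toy bigram model over a made-up corpus (chain rule: sentence probability = product of next-word probabilities, ignoring the start-of-sentence term):

from collections import Counter

corpus = "the cat ate food . the dog ate food .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(prev, word):
    # Probability of `word` given the previous word, from bigram counts.
    return bigrams[(prev, word)] / unigrams[prev]

sentence = "the cat ate food".split()
p = 1.0
for prev, word in zip(sentence, sentence[1:]):
    p *= p_next(prev, word)
print(p)  # 0.5 * 1.0 * 1.0 = 0.5  ('the' is followed by 'cat' half the time)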

The below is trickier - discovering that they are in Atlanta too. But it's simple: it uses a transfer/= idea, like GloVe. See! "Like", "The", "=", etc. are key words.
"The fourth Wells account moving to another agency is the packaged paper-products division of Georgia-Pacific Corp., which arrived at Wells only last fall. Like Hertz and the History Channel, it is also leaving for an Omnicom-owned agency, the BBDO South unit of BBDO Worldwide. BBDO South in Atlanta, which handles corporate advertising for Georgia-Pacific, will assume additional duties for brands like Angel Soft toilet tissue and Sparkle paper towels, said Ken Haldin, a spokesman for Georgia-Pacific in Atlanta."

"Chinking
Sometimes it is easier to define what we want to exclude from a chunk. We can define a chink to be a sequence of tokens that is not included in a chunk. In the following example, barked/VBD at/IN is a chink:"
Haha, maybe good for data compression!
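The chinking example that quote comes from can be run directly with NLTK's RegexpParser; a sketch, assuming nltk is installed (the grammar and pre-tagged sentence are the textbook toy):

import nltk

# Chunk everything, then chink out (exclude) VBD/IN sequences.
grammar = r"""
  NP:
    {<.*>+}        # chunk every tagged token...
    }<VBD|IN>+{    # ...then carve out (chink) past-tense verbs and prepositions
"""
cp = nltk.RegexpParser(grammar)
sentence = [("the", "DT"), ("little", "JJ"), ("yellow", "JJ"), ("dog", "NN"),
            ("barked", "VBD"), ("at", "IN"), ("the", "DT"), ("cat", "NN")]
print(cp.parse(sentence))
# (S (NP the/DT little/JJ yellow/JJ dog/NN) barked/VBD at/IN (NP the/DT cat/NN))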

Yes, there are infinite English sentences you can write, and humans understand unseen pieces. We do the below. And someone made a short phrase into the below LOL (injections... translations, rearrangement, etc.):
"[You can imagine Piglet's joy when at last the ship came in sight of him.] In after-years he liked to think that he had been in Very Great Danger during the Terrible Flood, but the only danger he had really been in was the last half-hour of his imprisonment, when Owl, who had just flown up, sat on a branch of his tree to comfort him, and told him a very long story about an aunt who had once laid a seagull's egg by mistake, and the story went on and on, rather like this sentence, until Piglet who was listening out of his window without much hope, went to sleep quietly and naturally, slipping slowly out of the window towards the water until he was only hanging on by his toes, at which moment, luckily, a sudden loud squawk from Owl, which was really part of the story, being what his aunt said, woke the Piglet up and just gave him time to jerk himself back into safety and say, "How interesting, and did she?" when — well, you can imagine his joy when at last he saw the good ship, Brain of Pooh (Captain, C. Robin; 1st Mate, P. Bear) coming over the sea to rescue him...
"


Yes, you can keep adding on, saying "S but S" and "S when S" (summarizing into smaller representations):
"a.		Usain Bolt broke the 100m record

b.		The Jamaica Observer reported that Usain Bolt broke the 100m record

c.		Andre said The Jamaica Observer reported that Usain Bolt broke the 100m record

d.		I think Andre said the Jamaica Observer reported that Usain Bolt broke the 100m record
"

Humans connect words to visual real-world object words; we made language up, so while it evolves, most can learn new words by use of context - it's transferable.

"I shot an elephant in my pajamas" who in pjs? humans usually wearr pjs...elephants are too big, and it says "I" and "pjs" - they equal same thing, including "my".

"Having read in a text, can a program "understand" it enough to be able to answer simple questions about "what happened" or "who did what to whom"? Also as before, we will develop simple programs to process annotated corpora and perform useful tasks."


Below I read the Acuitas project thread; the takeaway:
While I pursue my attentive goals, my background subconscious, i.e. GloVe etc., pursues random thoughts like "why are turtles green?" or "do clouds eat?".
Yes, I like it: the AGI net compresses to do what GPT-2 can do, and then it asks us/itself desired questions and only stores what it likes / is convinced of - more compression, attention gates.
"The Stream of Consciousness, which holds all of the "Thoughts" generated by the various parts of Acuitas, has its own thread. That thread's job is to make Thoughts decay over time and clean them up when they've been in the Stream too long.  Another important thread examines the current contents of the Stream periodically and decides which Thought Acuitas will give attention to next.  Yet another thread is responsible for browsing the memory database and producing Thoughts about its contents.  So you basically have a producer, a monitor, and a consumer for the Stream that are all independent threads."
No... there are main and sub attention questions it asks itself/others, and these are the 3rd thread you describe (it does the GPT-2 thing). It's a cycling feed that creates or gathers, saves, then asks/hears more using what's saved... forgetting memory is another thing the brain does... "I am sad" vs. "I am sad today" - how long to remember this? First, you may forget it. Otherwise, if it survives - you know from past experience that "you = sad" doesn't last long. The AGI can also ask how long it'll last.
Other guy: "I eventually decided that a short-term memory only served the efficiency of information searches, so I store everything in one and the same memory, just with timestamps so that information searches can be limited to a (recent) timeframe. I also don't consider anything an absolutely fixed property, but consider everything adjustable properties, some just more "defining" than others."
Haha yip "it would be ridiculous for me to ask her on a daily basis, "Are you still a teacher?" or "Is your husband still alive?" or "Do you still have a nose?""
The AGI has goal questions; it doesn't ask them once they're answered and off the to-do list. Satisfaction is even important for updating the rest of its out-of-date knowledge:
"Perhaps your bots might want to check back say, every month to see if your spouse / partner is still alive or doing well, depending on what information you have previously told it. Perhaps you've finally retired from teaching last week and the bot wasn't informed? Pity that! Heh!!" Haha yeah - again, the AGI asks for old facts, for updates, even for frequent, cluster-central, uninteresting/undesired facts.
Ah, time is felt by the AGI because it has x amount of time and can track how fast it's going and whether it will meet the deadline (death/another date); it's all about attentional sacrifices - feeling time is in the attention probabilities over its decision candidates/goals. It may drop questions, or candidate answers!
"Several bots spend their "idle" time, "thinking, researching, comparing, pruning""
"When humans evaluate time, they do so by comparison." Yes that too.
GPT-2 does most of what your project is about, WriterOfMinds.
Yes, WoM: cat=animal=person=object... chair=furniture=item=object.
Sleep is good for intensive GloVe / random subconscious processing/cleanup - adventuring without disturbance.
"If something is a kind of something else, then the something else cannot be a kind of the something" is the same as: if b is under a, then a cannot be under b. It just matches a stored phrase, namely: "x is a kind of y, y is a kind of x, is contradictory".
I might want to refine this process so that it doesn't necessarily remove every redundant link.  There could be some frequently-used shortcuts that justify their use of storage space by improving search speed.  One might want to tailor the aggressiveness of the link-pruning based on the amount of storage available.
Indeed, hearing "a cat is not an animal" or any other such phrase should match to "is a" and see that it is contradictory...
All, some, none, always, sometimes, never, is, and, or, nor, xor, if-then, maybe, false, positive sentiment, human-written, fake, evil, high, etc. are all words/entailers to a match; this allows satisfaction from GloVe/hierarchy graphs where enough are seen and matched / enough are activated, and it says "cats always eat" or "this is human-written".
All dogs are mammals → Some mammals are dogs... It's an automatic inference... All A's are B ⇒ Some B are A, implied...
1) … what to store/compress/expand (and what to forget), 2) … how to organize stored material, and 3) … how to access relevant stored material when it is needed.
"When Acuitas is telling someone what he thought about today, he now has the option to check episodic memory and see whether he ever thought about this concept before, and how long it has been since he previously did so.  He then generates some comment like “I've not done that in a long time,” or “I did that a minute ago also.” "
Yes, you can say what belongs to what, e.g. "Joe's cat".
* Primacy: was this the first time something happened? * Uniqueness: how rare are memories similar to this one? * Complexity: how many features does this memory have? * importance/desire.
The above *s affect which memories to prune, pay attention to, store, and share/update others with. Don't delete frequently used nodes, nor unique nodes, nor complex nodes. Well, forgetting happens automatically if something's not frequent - too bad haha - except for unique nodes; however, in a-z there are ONLY 26? Loved/hated node experiences are remembered though. Ones far in the future are recognized up a node, which gives them less impact; however, my plan only gives me reward in the future and is most important - I work every day 24/7 on it. Immediate survival is daily-focused indeed, i.e. important. I seem to have made death-in-the-future the end-all, and the now matters if I don't act. A leads to B. Act = no death... Things are remembered if the above *s hold, plus if they rank/relate to such and are frequently observed. These also become attentive questions - frequent, ranked, unique, high-node.
Logic is about saying cat is of animal but not vice versa (or whether it can be), about OR, AND, etc., refuting truth, =s, summarization, segmentation.
Given: Jane is a human.
Given: Humans are animals.
Given: Animals are [either] male or female.
Given: Jane is female.
Q: [remember] Jane is male.
A: No, Jane is not male because Jane is female.
&
• if an X is a C, and Cs are A or B, then remember X is A or B.
• If X is A and X is A or B, then forget X is A or B.
• If X is B and X is A or B, then forget X is A or B.
• If X is not A and X is A or B, then remember X is B, and forget X is A or B, and forget X is not A.
• If X is not B and X is A or B, then remember X is A, and forget X is A or B, and forget X is not B.
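These rules are mechanical enough to run; a toy sketch under my own fact encoding (triples plus a fixpoint loop - not anyone's actual engine, and only the first three rules):

facts = {("Jane", "is-a", "human"),
         ("human", "is-a", "animal"),
         ("animal", "is-or", ("male", "female")),
         ("Jane", "is", "female")}

def step(facts):
    new = set(facts)
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if y != y2:
                continue
            if r1 == "is-a" and r2 == "is-a":
                new.add((x, "is-a", z))   # is-a chains: Jane -> human -> animal
            if r1 == "is-a" and r2 == "is-or":
                new.add((x, "is-or", z))  # if X is a C and Cs are A or B: X is A or B
    for (x, r, ab) in list(new):
        # if X is A (or B) and X is A or B: forget the disjunction
        if r == "is-or" and any((x, "is", v) in new for v in ab):
            new.discard((x, r, ab))
    return new

prev = None
while prev != facts:          # run the rules to a fixpoint
    prev, facts = facts, step(facts)
print(("Jane", "is-or", ("male", "female")) in facts)  # False - resolved by "Jane is female"
print(("Jane", "is", "female") in facts)               # True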
&
my thing goes like:
choose where to look
clarify/decode
see alternative entailing choices
pick one
translate it for a better-fitting choice
Implement it,
Evaluate whether the problem was solved or not. 
revise it
&
You can discover why a stomachache is in one person and not another based on whether, of all the people charted, only they ate (or didn't eat) oysters, or whether all who ate oysters got it, or both methods, or by the # of oysters eaten vs. the severity.
You preferably start with Deduction, using past experience to predict the future answer (no present, only past & future). This is really perfect. Otherwise: Induction, e.g. you see enough people enjoying burgers and assume you likely will enjoy burgers too, but it's not guaranteed. Otherwise: Abduction, e.g. you see a dead body and know they may have been shot, or had a heart attack, etc. - no one correct answer, just possible answers loool.
AGI/everything is a hierarchy that has vibration waves synchronizing, with regeneration/emergence in physics occurring. The hierarchy is cells, brains, clans - and yes, even these sync, and even self-attentively regenerate using translate/entail/summarize/inject/segment; even the very words in text or images are object/verb/locationWhere, i.e. entail/translate/segment, and all even ask each other for updating satisfaction - not just GPT-2 self-attention, but groups, and cells. I will ask questions when I need to, or when data is old, to update my knowledge, and so will others ask me themselves.
--------
HAHA YEP!
"I noticed one thing about time. When one is employed / working, Time carries an importance...to wake up, leave, make it to work on time, lunch for a prescribed period of time, working on projects, getting things finished, finally to leave work and commute home. Then it's time to eat dinner, then time to go to bed and do it all over again, while paying attention the day of the week as most do not work on the weekend.
&
When retired, time doesn't mean as much or carry that same importance as it did when employed. Hours can roll by as can the days of the week and which one or which day it is, doesn't really matter that much either. Funny how situational awareness correlates to time.
&
I guess most bots don't really have a need for time except to please their botmasters or customers."
--------
Continued stage:
This month, I did some work on cause-and-effect reasoning and goal satisfaction, which introduced the conversational possibility of asking Acuitas what he wants.
&
I leveraged the text interpretation upgrades from last month to implement encoding and storage of conditional relationships, such as “if a human eats food, the human will not starve.” These relationships can be remembered and used to infer the effects of an action. I also threw in the ability to learn that a pair of concepts are opposites or antonyms.
&
Then I implemented some inference mechanisms so that Acuitas can determine whether some action serves – or contradicts – a particular goal. Acuitas will now claim to desire things that support one of his goals and not desire things that contradict one of his goals, while remaining ambivalent about everything else. The examples below reference a self-preservation goal … not because I think that should be the primary goal for an AI, but because it's one of the easier ones to define.  In Acuitas' knowledge representation, it basically comes down to “Self (has quality)/(is in state) 'alive' or 'existent.'”
&
With this goal active, Acuitas can answer any of the following:
&
“Do you want to be alive?”
“Do you want to be dead?”
“Do you want to live?”
“Do you want to die?”
&
… where the last two (live/die) rely on verb-defining links in the semantic database, and the two negative versions (dead/die) rely on awareness of opposites.
&
The most complex inferences currently possible are illustrated by this little interchange:
&
Me: Do you want to be deleted?
Acuitas: I do not.
&
To produce that answer, Acuitas has to retrieve and put together five different pieces of stored information …
&
*If a program is deleted, the program “dies.” ← From the cause-and-effect/conditional database
*I am a program. ← From semantic memory (is-instance-of-class relationship)
*To die is to transition to state “dead.” ← From semantic memory (verb definition relationship)
*State “dead” is mutually exclusive with state “alive.” ← From semantic memory (opposites)
*I have a goal of being in state “alive.” ← From goal list
&
… to make the inference, “being deleted would violate my goals.”
&
The features still need a lot of generalization and expansion to be fully functional, but the groundwork is laid.
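That five-fact chain is small enough to run. A hypothetical Python sketch of the retrieval described above, with made-up table names - just the chain of lookups, not Acuitas' actual code:

conditionals = {("program", "deleted"): "die"}           # cause-and-effect DB
classes      = {"self": "program"}                       # semantic memory (is-instance-of-class)
verbs        = {"die": ("transition-to-state", "dead")}  # verb-definition links
opposites    = {"dead": "alive"}                         # opposites
goals        = [("self", "in-state", "alive")]           # goal list

def wants(event, patient="self"):
    effect_verb = conditionals.get((classes[patient], event))  # deleted -> "die"
    _, end_state = verbs[effect_verb]                          # die -> state "dead"
    for (who, rel, state) in goals:
        if who == patient and opposites.get(end_state) == state:
            return False   # the event leads to a state opposed to a goal
    return None            # ambivalent otherwise

print(wants("deleted"))  # False -> "I do not [want to be deleted]."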


Hi! I read the paper and followed the Transformer architecture, etc.

( As noted below, some behavior seems only discoverable by randomly finding it. )

I have an idea of how the bots are learning. First, I noticed each team takes a turn at finding a solution. It takes many games. Then the next team finds their solution. This looks like step-by-step learning, where they move randomly and hit a solution. But it's not full brute force: they take random actions and many are a solution, and the attention etc. aids them, making it a partial brute force? I noticed the bots move/grab perfectly, just in time; they start off randomly moving around but learn what to keep doing and what to add on to their plan. It seems progressive, and it seems like something simpler is going on.

Take for example the way, at the final stage, the one blue hider moved the box into position for the other team member, so that when he arrived back it would be closer. Why? It must have randomly tried actions and found what worked in its super short time limit. What aided it, I'm assuming, is Attention - but I wonder why... how would Attention help it take better random actions?

For example, it needs to pull in the ramp so they can't jump over the wall... what tells only 1 blue hider to go to the other side of the room, and where to bring it!? This looks like brute force / normal RL. I don't see how the attention/sensor RNN thingy is helping the solution-finding occur quicker.

P.S. What about only 1 team, in a rainforest, that has to keep rain from entering the hut they need to stay in? Same thing, but the environment is the seeker/eater instead of needing other agents to find/eat/end them.

I also like how yous skipped learning to crawl/run/grab, just like skipping simulating atoms and aiming for high-level water simulation.


https://www.upwork.com/ppc/landing/?gclid=CjwKCAjwldHsBRAoEiwAd0JybXHg8xW3OKkea83DjB7T7G4K76HpCGI6R0GALI-a8lb4LwR23aH5kxoCNOEQAvD_BwE
There are many, many - endless thousands of - programmers here from all places on Earth, each getting paid on average 30 USD per hour! Some get 100 USD an hour if you are skilled as hell (and artists, music producers, and more). There are thousands of jobs for Deep Learning, ML, data science, data structures, tutoring, knowledge representation, you name it. When you say you are out of a job, you are super-duper-zooper wrong.
Wrong link. Just search Google for "Upwork". It is for both programmers and buyers. See the post above for detail. You are far away from being out of a job indeed. And the pay is so high it's cheating.
I know Upwork can look intimidating to earn money on... but trust me, you gotta find how to get your first good review, repeat clients, and big jobs (it's easy to get a 5,000 USD job, I'm not lying). Just coding, you are great; yet if you do Deep Learning or audio signal processing or multiple languages, you are even more able to find bigger jobs.

Well, after our galaxy is turned into the final state, 4 galaxies nearby will become transformed... then 16... then 64... then 256... until heat death, if no heat life.

When the cell emerged, it wasn't lucky to get a mechanism that can replace parts... physics allows it to emerge/repair: molecule chains link, and DNA lines up and attracts duplicates.
Neurons die each day; humans too; even groups, cities. But the mass/agenda is still here, getting better. The model/you is still alive. Immortal.

I already have the answer: you are a machine, and as long as your model/agenda ensues, nothing went wrong. It doesn't matter if you change your organs or add neuronal metal nodes lol. You're only a machine. Of course you'd stay you, and continue your jobs.

So, the one above - you mean to say you have 2 people that believe the same false answer is correct, and you drag them into a local optimum with a false problem and supply an answer that solves it, but it is not the best way to solve the issue? But this doesn't work if someone knows the optimal solution or uses a fine-tooth comb with enough time on their hands. Nor is it more powerful than AGI - I mean, sure, you can trick anyone, but not a fast-thinking, highly weaponized omega hive that has all the power to use tons of time and knowledge as it desires.

We scream, but in a modern way - on the internet. How do you punish someone on there? Well, if you are dealing with them, then likely they are very powerful and verified; then you can leave feedback and affect their built-up reputation OR build it higher. In the case of anonymous chit-chat, there's nothing you can lose but a bit of time and information, while you can gain it back.

Yes, but you must live in San Francisco to work at OpenAI. Technically this can be overcome, for example if you made something close to AGI; and I think they work with a Google team over the internet, and already all the information on the internet is as if we are all part of the same team lol. I noticed one fellow named Ian Goodfellow had his idea, called the GAN, picked up all over the Earth because it was useful; and while he may have already been familiar with the field / had written a formal paper etc., anyone can spread their idea if it's good enough.
This is OpenAI lol. You have to live in San Francisco. Jack said in an email to me once "for now", at least. Though clearly they learn from the internet's info from everyone! And they work with Google teams. Indeed, good ideas (and inventors) take off, like a petal in the wind, and replicate.

You could say a brute force is a universal learner, but it takes much, much longer. You can do 517 x 827 in your head if given more time, but that's akin to the brute-force idea. Brains are better; but a calculator is better still... Is there a single universal learner that can solve any problem, each in the least amount of time possible? Brute force can, but in the longest time. However, not so for small programs. This brings up a relationship to a brain. Suppose you have a mega god as big as a galaxy, in its final form after consuming the galaxy, tasked with moving to x location or morphing into x arrangement. The closer it is at the start, the faster it will be done. Clearly it is not optimal for all problems. It is geared toward trying to stay alive, not turning into a solid metal block. Therefore some solutions cannot even be reached! And some solutions that are not needed as often are allowed to take longer than they could. It IS optimal! And yes, turning into a solid metal block, blowing up in some desired sort of way, or making a certain soup to eat, are all Problems, because a Problem involves moving around matter/time in the least amount to complete the task, in accordance with the physics-evolved task made & given by a human. So yes, the mega god will take longer or use more resources for some problems, but it will be optimal still! Indeed, there is no single solver that is 100% optimal - that would be pure magic! Like making 66 bits into 1 or 0 bits!

My friend once told me, a year ago, that even my idea needed what a NN has - it knows exactly how close cat=dog... not just that cat=dog, but rather cat=dog at e.g. 68%. The way to store all these is exactly what W2V does - it uses a net, and it is efficient. We may be stuck with NNs being a part of AGI.
Every single word connects to every other single word... each with an =s score weight... that's a lot of connections and weights. And you need them to be precise to progress further, plus to utilize the translation.
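That "cat=dog 68%" score is the kind of number a cosine between word vectors gives you. A sketch with hand-made (hypothetical) vectors - real ones come from a trained net like W2V:

import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

cat = [0.9, 0.1, 0.3]   # made-up 3-d "embeddings" for illustration
dog = [0.8, 0.2, 0.4]
print(round(cosine(cat, dog), 2))  # 0.98 -- 'cat' and 'dog' are close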

If I film my Big Movie, I may go through all I know and cancel out in my brain what was said; but be careful - I may not say all I know on take 2, and this may cause loss of quality. Practice will make perfect, though, and I will remember it all, better explained.

But wait - we are not separate people, and the calculator is part of the optimality... we are a large program, an optimal one, that has brains plus calculators in it, combined. So yeah, we are not separate; we are a large single hive that is optimal according to the physics-driven energy equilibrium it reaches (however, the utopia god ball may keep growing, without giving back out as much as it intakes (unless it must stay a certain size, like a star tries to by giving off heat)).

The universe takes, and gives; planets become giants and turn into stars to give it back... and the heat death of the universe? Or does it come back as heat? Maybe particles spawn. Maybe we can keep it contained. Maybe the universe is infinite and heat always returns from other places. Maybe gravity pulls it all back.

...In English and Bulgarian... Artificial General Intelligence (AGI), AI, Robots, Thinking Machines, Deep Learning, ANN, Computational Creativity, Natural Language Processing (NLP), Neuroscience, Cognitive Science, Developmental Psychology, Cognitive Computing, Computer Vision, Visual Effects, Software, Sociology, Philosophy, Linguistics, Filmmaking... and Tech and Artistic Stuff and Fun

Although still crossing ethical beliefs, it'd be more logically harmless, and brave, to take any and every dead person (young humans/animals) and experiment/study. Though they would not be alive.
Many are already dying in pain; it's the right thing to do.
&
Alternatively, you can take live cells.
&
I still believe I could make cryonics work with all the time I'd put into it. I think it looks easy to test... so many things to compare on what improves it!! Lots of tests could be done. Just take clumps of cells.

"Why is compression the key to AGI" Because Unsupervised Learning is the hard part and the most important part? SL and RL are the cherry on the cake.
&
I agree; however, there's more to AGI than just a compressed network. For example, the task of switching between summarization, translation, entailment, and segmentation requires the net to change its mode. Hell, you can even ask it to search a story for the word "duck", or to add every other word after each word in the original story:
&
The [duck] sat down.
=
The The duck sat down duck The duck sat down sat The duck sat down down The duck sat down.
&
I recognize an error - I did every word. Here:
The The sat duck The sat sat The sat down The sat.
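The corrected version checks out; a few lines of Python reproduce it:

sentence = "The duck sat down".split()
others = sentence[::2]            # every other word: ['The', 'sat']
out = []
for w in sentence:
    out.append(w)
    out.extend(others)            # insert them after each word
print(" ".join(out))
# -> The The sat duck The sat sat The sat down The sat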

AI research - but it encompasses everything (at least, mine does). You could search Google for the OpenAI website. My goal is human-level AI made in the computer. It must generate discoveries using past knowledge. It includes translation, entailment, summarization, segmentation. Compression enables its knowledge to be general enough to solve similar/translatable problems. Then the questions it answers are reward reinforcement, and it chains forward along paths as a branching tree, like "how can I gain food?" - either go down the path of obtaining money, or grow your own food. There's a lot to it, but it's very explainable (not many can explain it fast, but I can). After strong AI is made, it will replicate as nanobots, just as cells and mammals do. This ensures not only maximal computation/storage of information, but also maximal life persistence. A fog of nanobots (or a group of humans) is like a brain: many neurons/brain nodes in the fog die each day, but the mass of the group/brain stays unchanged. This information loss doesn't affect the mass, and it's the same in translation - some words are ambiguous in meaning, but the surrounding words help translate the word to a better word, e.g. "please [stick] it to the door" (becomes [attach]).
&
It seems, as above, that there's a clear pattern - hierarchies that fill in to replace, and what comes next will entail the plan and/or carry it out as people's movements. The more nodes, the more info/knowledge/mates, and the more replication of ideas/planning/carrying it out, exponentially. So in short, exponentially more plan knowledge will result in replication faster and faster until equilibrium - the fully refined plan carried out as it evolves.
&
IOW, the more data or knowledge, the faster the replication of itself - it self-regenerates faster & better; there's just more chaining, with higher probability of truth/likelihood.

Text and voice are expressive, actually... voice more, though, I think.

The universe began, and evolution and sex/eating/birth/death/pain have been happening ever since Earth began life. The universe doesn't exist if there is no observer in the machines. So it makes sense the universe will settle into an equilibrium state. But why not Big Bang us an already-utopia state that self-repairs instead? Maybe the data cost is too high; the universe began with the shortest information built - enough for physics, though. Why? So hard to answer a weird question; maybe we are an infinite fractal hierarchy of information that can observe itself - something we know on Earth already!

I got a $70 microphone. I'm going to use it to turn my voice into text, then refine that text, then once again turn the text into speech, as it were. The result is I can say a lot more story (plus it's refined!) while being in expressive voice! I find, as I talk (especially for long periods), I can go deeper than otherwise possible and make bigger discoveries on top of what I'll already be presenting!
&
I wonder how K's book is going. When is it done? Can you update the other nodes on that, K? Updates refine into better encoding and adaptation of the elements. I can tell you one thing: all my movie will be gold, like the shake-the-bucket-of-rocks idea that levels them without touching the rocks programmatically.

My main goal is to make a human thinker in the computer that takes information and outputs answers to installed goals (food, life extension). It has to answer new questions using past experiences. The answers it generates can translate into motor actions. So storytelling is actually "discoveries", and is actually "action" for the motors of real machines/humans.
&
You can see on the internet little generated bots learn to crawl by randomly trying movements and then storing which ones propelled them forward. OpenAI seemed to show that ignoring learning-to-walk and just going higher up does work - they learn to cooperate and barricade, while the running, well, they just stay upright by cheating lol > https://openai.com/blog/emergent-tool-use/
&
However, I don't think they will learn to walk and then build rockets using simple rewarding or searching through many paths, as the above seems to do. I think they need real observations like text/images, and to give the output that is most likely the answer. You can know how to open a fridge if you've only ever opened a door.
&
The bots above seem to know that going to objects / running with objects is useful, or that going around walls is useful, instead of randomly trying all action sequences that are 20 seconds long lol. So that is like the story generating. With some random trying, I think. But it still lacks the actual thinking! Lol. Because the decision of the Next Action is based on what's in front of them.


We die because evolution wants fast cycling/mutation. In with the new, out with the old resources. It refines the structures / the overall structure.
&
Indeed, evolution rotates us to make better structures. Sad.
&
With AIs, they can stay immortal and clone. They can update and transfer their minds to better bodies instead of restarting the process like kids must go through.
&
Yes, things like memories must go too; it's OK, we still stay us. And importantly, our agenda is unchanged. I just know my work better day by day, it is seeming. As K said, I may even lose how I got there, though! Gotta go through your work and save in notepad how you got there. All the facts you learnt. The bad, and the good.


I was working on the compression prize a month or so ago, for many reasons. I failed every time, but I learned a lot more about trees, bin files, etc. Every time I'd come up with a unique solution that's innovative, but it was never verified to do the job well enough.
&
One idea was that 9^9^9^9^9 creates a massive number in binary, using just a few bits. It's most of the file. You can generate a huge movie - at least a few thousand for sure, out of all possible movies. Better yet, you can manipulate it with high precision, e.g. 9^9^9+1 changes the binary data by 1 single bit increase. You could attempt to recreate small parts of the binary stream like this, instead of 9^9 for the WHOLE bloody file.
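A quick sanity check of that idea in Python, at a size that still computes instantly (9**9**3 rather than the full tower - everything here is just illustration):

# A handful of characters of source expand into a huge binary string,
# and "+1" nudges it by only the lowest few bits.
n = 9 ** 9 ** 3
print(n.bit_length())   # 2311 bits from seven characters of source
print((n + 1) ^ n)      # XOR shows only the lowest bits changed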
&
One idea was a binary tree - stick parts of the 100MB binary stream into a binary tree and remember the order the branches lie in. Both "10100001" and "101110101" share the first 3 bits. Binary is already storing massive numbers, so this idea topped the cake: it stores binary code in a yet better code.
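A sketch of the shared-prefix part: a tiny binary trie (my own toy encoding, nested dicts) stores the common "101" prefix of both strings only once:

def insert(trie, bits):
    # Walk/extend the trie one bit at a time; shared prefixes share nodes.
    node = trie
    for b in bits:
        node = node.setdefault(b, {})
    node["end"] = True
    return trie

trie = {}
insert(trie, "10100001")
insert(trie, "101110101")
print(trie)  # the '1'->'0'->'1' path is shared by both entries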
&
One idea was to store the binary bit length (or word length), the dict of all words, and add word by word based on a range, working through all possible files. Last I remember, this would have worked if there wasn't SO much unstructured crap in the 100MB :-)
&
Another idea I was just pondering is taking the smallest text file and the smallest compressor and finding the maximal compression, then on slightly larger files each time. You could possibly learn what max compression looks like.
Also, the decompressor and compressed txt file are smallest when both are smaller in size - the decompressor could have ALL the data in it and cheat while the compressed file has 0 bits; therefore, if both are evenly sized, both together might be smallest in size!


Yes, I read all of it, Matt, 2 months ago; it was thrilling to read.
&
@James, I did intend that they both were combined. Above are 3 visualizations, each with 2 sticks of a certain length. My point was that the size of the data you start with is the same whichever stick is the original... the actual compression begins when you actually try to compress it, making both sticks shorter. Some decompression programs may be larger than others, so it was only an idea that if both are evenly long, both together may be as short as possible.
&
I noticed patterns mean compression is possible. A certain algorithm like the one in 7zip can compress (at least to a fair amount) seemingly any nonrandom txt file fed to it. And a human brain can attempt to compress any given txt file to the maximum amount.
So, while there may not be maximal compression or polynomial solving or fully optimal intelligence (calculator vs. brain abilities), you can on a larger scale have a fully optimal one that works for many problems.
Indeed it seems so: just as one given program can't turn a structure into anything as fast as possible, a cluster of them can - IOW, other programs - because often we can't survive with only 1 program, so we end up with a cluster of organs, an optimal one. And in fact this begins to become a single program at the high level. An optimal one, according to its equilibrium state.
&
"The best text compressors model the lexical, semantic, and syntactic structure of natural language. The whole point is to encourage AI research."
Can you correct me: the previous winning program (basically) predicts the Next Word in the 100MB, as a lossless predictive program?
And I noticed you link a paper on Transformers - yous are looking into those?
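For the record of what "lossless predictive program" means here: with arithmetic coding, a predictor's compressed size is roughly the sum of -log2 p over its next-symbol predictions. A toy sketch (the uniform predictor is a stand-in for illustration, not the actual winning program):

import math

def coded_bits(text, p_next):
    # Ideal code length: -log2 of the predicted probability of each symbol,
    # given everything before it, summed over the text.
    return sum(-math.log2(p_next(text[:i], text[i])) for i in range(len(text)))

# A toy predictor: uniform over a 27-symbol alphabet -> no compression at all.
uniform = lambda prefix, ch: 1 / 27
print(coded_bits("hello world", uniform))  # ~52.3 bits for 11 characters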

All ya gotta do is get a cryonics degree, get a lab, and make a huuuge notepad or whiteboard and fill it up with a bazillion ideas (with tests). Take out some 20 years and we'll get there in no time. Put the chips down and you will find the glorious gates to a utopia with much, much more chips than 80 years will feed you. That sadistic high you get when eating is nothing; it will be again. An endless feast.

Agreed, the universe may be simpler to begin with if its laws create all universes/laws. Less data, maybe. If it began with us all in a utopia, that'd have been wonderful :), but that's lots of data. The universe we observe is a partially ill-founded sentence; we are machines, and no universes exist except the one where observers are in the machines, and only the area they exist at - Jupiter is not being observed right now.

Just said a lotta stuff, but it's gotta be simpler... see above.


Agreed. "Conscoiusness" is only a word in my book for the attentional filter/temp memory/predictions. There must be however an observer, else the universe doesn't exist at all :D as there is nothing but machines, and as far as that goes, nothing even happens in here. But it's not controlling me, it only observes the lawful universe. Randmonness, may however also exist, where particles spawn/disappear or move faster/new direction (different than clock time randomness injection).


Some religious people are ignorant and closed-minded to new data; they lack data. Like a child, they will believe anything, and do so if friends/parents do. They say things like "well, I'll never see my son again if there's no god" - ya, well, that's a local anesthesia trap issue, an answer to the wrong question; the truth is there's no god or magic, only in VR. There are hundreds of popular bibles; they were written for governing people, and maybe to fool them. They self-reference themselves and basically say to not learn any further data, because God gives and solves all problems, stay calm - a trap! They are all tweaks of the same thing, though. Any monkeys on Mars would create bibles too if given enough time. Learn lasers, microwaves, REAL stuff; we don't bring god to work - we don't all agree on that, only on work; we don't connect on it, plus we can focus and cooperate as a wave in sync. Some things (unlike a banana being edible food) in many cases are not a guaranteed theory/answer, only a probability!!!!!!!!! Please be open-minded enough to grasp that: the feel-good theory that is sold to naive folks (bibles) doesn't work and only looks perfect, 100% good - it only appears so to you because of so few tests or facts, just as kids are more likely to believe when so few facts are known.

"So the only thing that needs explaining is this feeling that you are conscious."
When I look out my eyes and see, there's a lot going on, I mean, at the atomic level of what I see. But that's what I see. And by the end, what I see is what you get. I mean, there's a lot in the day I see, it's not static. But I need memory to know that, or rather I get a thought that instantly says "that". By the end, we are only a moment in any present time, the future based on the past. Like an image, or tree branch. And none of it can be proven for it is just an image in my mind saying "that". And I see that. It's where I'm at in time. I don't even know I exist. Forget all I said in fact. But it works, I mean we work. We do? I only see though. Oh yeah. You don't exist. The universe can't exist without an observer. It sees. I don't see. See!?

"I often wonder, what is the term for compression that is both lossy and lossless?"
When you've got your sh** together and someone else says "omg you stupid a**, you look ugly af" -
that's local optima. Local compression.
&
lossy and lossless = perfect and not perfect
you look good, but other person thinks you ugly
fractal limiting....you lose data but the same pattern is there
&
Being both lossy and lossless is when you meet a criterion - when you've lost data but have enough to either get the data back or still meet the criterion.
&
"or the act of forgetting some but remembering what's important"
Which is Destiny anyhow....there's 0 entropy loss in the first place.
When you forget stuff, it's ok, because everything is perfect and going perfect! Even your knowledge will fill in the cracks. And some loss doesn't even change thins.


I'll explain the major theory. I could go on and on about small things. I want you to imagine this vividly as a video of images in your brain: So the universe works by physics, mechanisms, at the particle level. It has been evolving ever since. Our solar system emerged (the self-organizing property of physics), then the cell on Earth, multi-celled organisms, humans, eventually computers and software. The universe is changing its state, evolving; it is destiny and is predictable and can be simulated, and everything self-emerges, and does so faster and faster, exponentially, until it reaches an equilibrium state like waves in a box of water, where the entropy settles. Cycling remains, however, else there would be a frozen world/utopia in the future. Like water, the waves settle, but Brownian motion remains. A planet becomes a star when it gains too much mass, and it returns that mass/energy back in blasts by becoming a star; this is local cycling, but the global environment is actually changing. The exponential curve is made of smaller such curves - when e.g. cars are made better and better, faster and faster, they eventually reach a halt and require a totally new solution. At the end of evolution is the most advanced creature, started from nanobots transforming the Earth and growing by eating galaxies. This megastructure is most optimal on a global-optima scale - a calculator+brain solves many problems, but a brain alone can't work with diverse/big numbers; so you have organs that alone can't solve a problem in the fastest time (for the utopia, a change to a new structure x or y - some take longer, some would mean its death (all metal)), or use the least amount of resources, or be most intelligent at evolving fastest, or compress data the most, but globally it is optimally perfect on average. Local optima are an issue also for answering questions; sometimes you have to go back a step on the tree of possibilities.

Information starts off random, like the Big Bang, or water, or a group of animals fighting, or neural brain waves propagating, but it synchronizes later (look up 64 metronomes in perfect synchronization) - the vibrations sync and can travel faster as a wave. Applications of this below. There are two types of randomness in physics: the kind that only seems new, where you take clock time and inject it into a video game (or see a new dress), and the kind (if it exists) where a particle ignores its destined law and randomly moves somewhere else, or something like that. We have simulated car crashes in the computer as information without needing to simulate atoms, and we can predict them perfectly or change the result using randomness from clock time. Computers can also avoid true randomness, if it exists, because they use redundancy (the same info replicated many times); computers already use this technique to be perfect and digital. Issues lead to innovation. Still, to this day, all wild animals will find their food/mates by taking families alive, because they are indeed food/mates. Humans today do it in a civilized internet way, be it poor foreigners scamming or a friend competing on research work. We age fast because high mutation and breeding rates re-use limited resources and churn out the most intelligent, who can overcome competition nearby in limited time. So: information evolves and churns itself using its own next state. The past predicts the future - the present.

Information/data in the real universe is evolving right now. Intelligence emerged in organisms and is really just physics. Brains are taught information, teach it back out and replicate it, and repair it (forgetting/death is part of revision using finite resources). They update close connections (friends), not just neuronal connections. The brain is actually a compressor of information (mainly images and text, which are both natural language - sequences of plans (which can translate into actions by link association)) and will be general enough, using past experience, to make analogies from old answers to new Problems, and will also learn new answers. The brain is a network hierarchy and heterarchy that builds bigger sentences/nodes from smaller alphabetical parts, and "cat ate the" can match up with "dog swallowed our" because cat=dog - yet more re-use. The brain is born with rewards from evolution for food/sex and will transfer neurotransmitter reward signals to related goals that get food - money/jobs - food=money - and it will go down tree branches of candidate plan sequences of actions to take (senses associate to actions by links; a sentence is literally action). You will talk to yourself or others if stumped, and ask and answer "I will obtain food by getting a job / growing my own food"; then you go down the next best branch, "I will obtain a job by making the customers / my boss satisfied". You can refine not just the answer but the question - maybe growing food is the better cost (for you). Words like food=money=job are all clustered in a neural network like GloVe. It discovers translations for words by seeing them in the same or similar contexts frequently enough compared to the global population, e.g. "the cat ate food" / "the dog ate food". Its goals can update. And it can generate the next word onto a sentence by looking at the last 7 or more words and using frequency and relation - "The cat was in the" - the candidate 'cage' is high probability because it is seen nearby 'cat' often in a large amount of text (gigabytes). Like compression, the final organism is most general in its abilities. They will think at the speed of light, move using light and nanobots, have tons of memory and arms made of arms, can clone, instantly regenerate or flee, wirelessly communicate, control a nanobot from afar using a megabrain, transfer to new huge bodies, reverse time in simulations - the list goes on.

Just writing this text story I am making discoveries and re-organizing my fact parts better. Search Google for OpenAI and look at their text generator - it's perfect and I'm not lying - try it by searching Google for "talktotransformer". It is a self-emergent property in physics using information and uses the Self-Attention technique in AI to do the predicting of the next likeliest word. It works on images and music too. It is a general network architecture they found. Brute force search can generate & verify ANY algorithm but costs too much time the bigger the problem - you need hints, experience, data! But not random data, that'd be useless - you need patternistic data! Like 2 sentences that have 'cat' in them, where 'cat' re-occurs more than 1 time, or even 'c' re-occurs. General algorithms are efficient. Like GloVe: the surrounding words let it know cat=dog, and GloVe could later say if dog=cat/horse and zebra=cat/horse then therefore dog=zebra - the same algorithm with many senses could say sound=image. This also helps it predict the next likeliest word "The [cat] was in the [cage]" by using multi-sensory context. There's a pattern here, everywhere! The brain is a hierarchy and a heterarchy. Letters make up word parts, those make up words, those make up phrases...and so on. And GloVe is a heterarchy - relations base the next higher relations. Lower nodes in a net are re-used. Lower cost = more general intelligence on unseen problems, until the training on the network reaches equilibrium. This hierarchy is a hierarchy of contexts, because the node "thank you" is part of many sentences in the same network. Regeneration works on all information with patterns: knowledge, DNA error correction or specialization or duplication, wound healing, person replacing. You really can remake anyone from the dead, even exactly, and clones. They're all you. And you can enter a simulation or new body. Every day we change at the particle level. No one is a single system really; I am many things and part of a society hierarchy. We fight to live because it's instinct. Tables don't fight. However the universe may not exist if it's all just different arrangements of particles - it started off random but ended up in a new non-random state. But that doesn't make anything different. It's just particles; I'm no different than a glass of water. There would have to be an observer of the universe, in a brain, or else there is no one to actually see the universe. But an observer can't move things, only feel them. The universe works by laws and has no need for an observer, nor can it compute such - only to exist must it have an observer!
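Here's a hedged toy rendition of that GloVe-style claim (hand-counted co-occurrences, not the real GloVe training objective): words that share surrounding words end up with similar context vectors, which is what licenses cat=dog and, by transitivity, dog=zebra.

```python
# Toy co-occurrence vectors: 'cat' and 'dog' appear in the same contexts,
# so their vectors come out nearly identical (cosine similarity ~1.0).
import numpy as np

sentences = ["the cat ate food", "the dog ate food",
             "a zebra ate grass", "a horse ate grass"]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
co = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):                  # neighbors within a window of 2
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                co[idx[w], idx[words[j]]] += 1

def sim(a, b):                                     # cosine similarity
    va, vb = co[idx[a]], co[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

print(sim("cat", "dog"), sim("zebra", "horse"))    # both high: shared contexts
```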

Entailment uses translation, segmentation, and summarization; they use each other, too. DNA, brains, and groups repair and replace others and do not fail if 1 node dies - they correct errors. They emerge on their own from physics, and re-generate using information to reach globally optimal solutions. This is physics. Replication of cells, knowledge (plans), and nanobots results in optimally efficient maximal computation, memory, resistance against death on a global scale, etc. A brain has multiple attentions. Which sense is yelling? Which goal is priority? What topic is trending in the last 2,000 words heard? (topic: physics lol). Attention is used to refine something. But that's local attention. Global attention works in the background on things you don't focus on - all topics, fashion, Trump, etc. This allows it to discover relations and entailments that may let it go deeper, unstump itself, and make wider analogies in a different domain of information, syncing up the waves in a hierarchy of nodes to become one standing wave and focus, like a laser beam. Dreams take you on higher attentive adventures. Daydreaming is thinking.
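Since "what topic is trending in the last 2,000 words" is a concrete, checkable mechanism, here is a tiny sketch of it (my own illustration; the window size is just the number quoted above):

```python
# Sliding-window topic attention: keep the last 2,000 words heard and
# report the most frequent ones - a crude stand-in for 'trending topic'.
from collections import Counter, deque

window = deque(maxlen=2000)

def hear(word):
    window.append(word)

def trending(k=3):
    return Counter(window).most_common(k)

for w in "physics waves physics attention physics goals".split():
    hear(w)
print(trending(1))  # [('physics', 3)]
```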


Some sources are ignored/believed; an answer is often based on evidence + who the source is! How many people agree, too. You must be open-minded and able to change answers, as it happens often in real life. You need lots of data and education on diverse topics so that the proven science can work - they all feed into each other and unify even when 20% are crap/wrong. More context leads to understanding. Go with the flow indeed. However I'm a researcher - me, not with the flow? Nope, I am....I know a lot and do a lot others do, and we are ALL specialized and 50% not with the flow: some are farmers, some are designers, we are ONE SYSTEM, ONE WORLD - all my house items are made by others lmao! However, as with brain or group waves, domains start off random with no patterns, but exponentially faster the waves unify and vibrate one way in sync, and so the nanobot clump will be fully general and do it all - farming, science, fashion, factory work, not just one specialty.

When you show someone text, they'll likely read it. If you end with a question, they'll likely answer it. Interesting observations. Better if they want to and like the question/story.

I predict that when The One emerges and truly understands a lot of his/her work, he will talk fluently (and often) about it like child's talk, able to summarize it and explain how to build it to anyone. I'm starting to find this is true with myself.


Interesting: it starts off with goal words, is taught, learns unsupervised, can ask others / do tests to verify answers, can search for more data, and it can ask itself the question on its priority list plus add the next word by likely probability - but can it be interrupted by sensory attention, depending on how important the question it asks itself is?


Lasers are powerful when you up your temperature and talk a lot; don't do the opposite and mumble loosely, like "yea the bits are there is it like a real tape but you may see it then".

Focus also means taking the whole day out for your work. Seems counterintuitive but it isn't. Your mind remembers daily stuff in a temp folder and deletes it in a few days. And your next 20 years could be 40 years if you do x2 more work.

Lastly, uniting the beams (teammates) is crucial. This is the higher end stage, the high-level node; we must refine the base nodes FIRST - we ain't ready. We're too random to sync up.


I work on AGI. To solve the problems life gives us. To solve death. And to give us unfathomable pleasures packed full of spicy action in a huge megaworld.
ranch (Oct 11):
WHERES THAT PASSION COMING FROM!!!!!!!
immortal.discoveries (Oct 11):
The Lord.
Well. Good question. I've tasted life and want more. I figured out I'm going to die, and I'm putting all my time into working on a big plan, resisting all fun and valuing my plan instead.

You mean, use Lossless Compression as a new method for Congress to make decisions :P
Everyone can agree on hierarchical compression. It works.

Re
attention types
recognition
thinking
node cost
flexibleness
generalness
compression
senses=actions
language=images
story telling
satisfaction
generative
predictor using past experience
reward transfer
links
big data
big model
dropout
revisement/update of nodes/friend connections
unsupervised learning
parallel sequences
hierarchy
heterarchy
feedback scientific method
Self-Attention in Transformers
self-regeneration
physics, self-organization
propagation of waves in hierarchies of aligned nodes
clustering by k-means
entailment
translation
summarization
segmentation
task CHANGE by energy
AGI=simple/efficient
re-use of hierarchies
adaptation
adjustment
update
revising
repair
replication
emergence
candidates
probability of entailment/translation

Lots of these are the same design. It's pretty simple. You can kick butt and help us refine it easily if it's explained clearly. I'm working on a big movie and will explain soon.


@MM I had thought about that a few years ago. Once the first AGI is made, it will be faster, ex. by 50x, using light; never sleep (2x faster); never get tired or unfocused (faster!); have more memory, attention, sensors, data - the list goes on and on actually. 1 AGI will seem like 10 brains, [maybe] 1,000. It will find ways to improve neural discovery and research.

But we DO need more of them. I realized nanotechnology is the answer, because although humans will begin to manufacture much much much more computers, it just won't be enough in time (maybe!). You can fit more in, utilize resources, and make more computers faster. The first 1,000 AGIs running on supercomputers etc. will work on extending their processing and data intake using mass-scale, easily fabricated nanobots. They will tell us how / tell us how to give them bodies/actuators so they can do it. These will be less advanced nanobots, more focused on making more computer storage/processors. This won't eat up all the Earth exactly...but later it will. Later, the most advanced corpus of nanobots will be multi-flexible and will turn Earth into a final optimal form. This is the most efficient path. It makes their existence highly unlikely to become extinct while giving them extreme power.

If you want to see a bit of what such a God looks like, take a look at T3000 at the end of the movie Terminator Genisys.

It will be as amazing as the moment the first natural cell emerged on Earth :)
This artificial cell is much smarter though. It will replicate much faster like cancer. It will find more edible food than previously.


More context is better; however, you also need summarization, so here:

4:
lots of non-random data (else no hierarchy)
goals
revisement (learning)
prediction (attention (iterative regeneration))

All 4 are the same thing....the prediction adds nonrandom data to its learning and updates its goals. Endless feedback, and it talks to itself in its brain.

With AGI, you need to teach it knowledge (text and/or images), it needs to have goals, it needs to generate solutions, then store that new knowledge, learn, and update goals.
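A hedged sketch of that loop in toy form (all names here are my own inventions, not an actual AGI): teach knowledge, pick the priority goal, generate an answer, store it, and transfer reward to the new plan.

```python
# Toy data -> goals -> prediction -> learning loop, echoing the 4 items above.
def predict(knowledge, question):
    # stand-in predictor: recall the latest related fact, else invent a plan
    for q, a in reversed(knowledge):
        if q == question:
            return a
    return "get " + question

def agi_loop(knowledge, goals, steps=3):
    for _ in range(steps):
        question = max(goals, key=goals.get)       # highest-priority goal
        answer = predict(knowledge, question)      # generate a solution
        knowledge.append((question, answer))       # store new knowledge (learn)
        goals[answer] = goals.get(answer, 0) + 0.5 * goals[question]  # reward transfer
    return knowledge, goals

kb, goals = agi_loop([], {"food": 1.0})
print(goals)  # reward has leaked from 'food' to the related plan 'get food'
```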

With nonrandom data in the universe, and a certain physics destiny, everything is working on its own under predictable laws. When you have data that proves cat=dog, this too is predictable, and if your device is the right one - it will model predictableness. The universe is heading towards equilibrium, which means you end up with a megastructure that repairs and resists change, unlike all the devastation in the cosmos. Such instant-regeneration stems from a globally-optimal haven that is made of smart cells, some general and some specialized, including debris like metal parts in between. Equilibrium is instant self-regeneration. Repair/revisement is the keyword. You fix, add more data, learn. You do need real-world observations and real-world implementation of the story-plans you dream in your sim model. Your predictions help add more for the lack of real world data.

All of evolution emerged on a journey through which each new soup resisted change. Breeding fast resists change. A table doesn't defend itself or remake itself or repair itself. The Earth resisted destruction through many deaths; the cell did, the human did. They came together on their own - they literally attract together: mass, mates. The self-organizing/emerging universe state ends up resisting change; until then, change will happen dramatically. Less change will happen near the end, but by that point the instant-regeneration megastar will have nearly appeared.

And you thought exponential evolution was just because it got smarter. Lol. You think - ah, cell phones, more better data faster, more humans... But now I've pinpointed it.

Anyone working on AGI is working on Rejuvenation/Regenerative technology for repairing. That is: self-emerging, self-organizing, self-attention, self-generation physics. As the world gets closer to the equilibrium in evolution - the end state - it will regenerate and emerge exponentially quicker. It will add back a block it was missing, then once there are 2 blocks it can add TWO, then 4 once there are 4; today we add ex. 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000 every year instead of 1. The whole world will become nanobots and can instantly regenerate if half were blown up. It will then grow and eat planets.

"In 1964, James Lovelock was among a group of scientists who were requested by NASA to make a theoretical life detection system to look for life on Mars during the upcoming space mission. When thinking about this problem, Lovelock wondered “how can we be sure that Martian life, if any, will reveal itself to tests based on Earth’s lifestyle?”[8] To Lovelock, the basic question was “What is life, and how should it be recognized?” When speaking about this issue with some of his colleagues at the Jet Propulsion Laboratory, he was asked what he would do to look for life on Mars. To this, Lovelock replied "I’d look for an entropy reduction, since this must be a general characteristic of life."[8]"

Yes the universe is information; it has been evolving DNA and ideas on Earth. The universe is nonrandom information - there are statistical patterns - and it is rejuvenating by negentropy in thermodynamics. While yes there is just entropy happening here, at the end it resists change and is repairing itself quickly, a lot, so it is more like rejuvenation - negentropy.

"all should share in common the attribute of being entities that decrease their internal entropy at the expense of free energy obtained from its surroundings. As entropy allows the quantification of the degree of disorder in a system, any envisioned lifeform must have a higher degree of order than its supporting environment."

"Entropy in psychology
The notion of entropy as disorder has been transferred from thermodynamics to psychology by Polish psychiatrist Antoni Kepinski, who admitted being inspired by Erwin Schrödinger.[15] In his theoretical framework devised to explain mental disorders (the information metabolism theory), the difference between living organisms and other systems was explained as the ability to maintain order. Contrary to inanimate matter, organisms maintain the particular order of their bodily structures and inner worlds which they impose onto their surroundings and forward to new generations. The life of an organism or the species ceases as soon as it loses that ability.[16] Maintaining of that order requires continual exchange of information between the organism and its surroundings. In higher organisms, information is acquired mainly through receptors and metabolised in the nervous system. The result is action - some form of motion, for example locomotion, speech, internal motion of organs, secretion of hormones etc. The reaction of organism becomes an informational signal to other organisms. Information metabolism, which allows to maintain the order, is possible only if a hierarchy of value exists, as the signals coming to the organism must be structured. In humans that hierarchy has three levels i.e. biological, emotional and sociocultural.[17] Kepinski explained how various mental disorders are caused by distortions of that hierarchy and the return to mental health is possible through its restoration.[18]
&
The idea was continued by Struzik who proposed that Kepinski's information metabolism theory may be seen as an extension of Brilluoin's negentropy principle of information.[19] In 2011 the notion of psychological entropy has been reintroduced to psychologists by Hirsh et al.[20] Similarly to Kepinski, these authors noted that uncertainty management is a critical ability for any organism. Uncertainty, arising due to the conflict between competing perceptual and behavioral affordances, is experienced subjectively as anxiety. Hirsh and his collaborators proposed that both the perceptual and behavioral domains may be conceptualized as probability distributions and that the amount of uncertainty associated with a given perceptual or behavioral experience can be quantified in terms of Claude Shannon’s entropy formula."
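Since the quote ends on Shannon's entropy formula, here is the formula itself in runnable form (standard textbook definition; the example numbers are mine): H = -Σ p·log2(p), where more spread-out (uncertain) distributions score higher.

```python
# Shannon entropy: quantifies uncertainty over a probability distribution.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0]))        # 0.0 - no uncertainty at all
print(shannon_entropy([0.5, 0.5]))   # 1.0 - one bit of uncertainty
print(shannon_entropy([0.25] * 4))   # 2.0 - two bits
```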

Yes, in the evolution toward the energy-stabilization end state, the rejuvenation requires information self-attention: eating, digesting, creation of new refined generations of DNA/data. You eat, you breed. High throughput means an advanced creature.

"At some point, organisms normally decline and die even while remaining in environments that contain sufficient nutrients to sustain life"

Yes, I was actually going to write: Some humans never want to be immortal, some want it, some want other things. This is normal. But globally the world is heading to a state of full regeneration or destruction.


Just a few days ago I typed up a huge paragraph, lost it, and got it back by retyping it (it was in my noggin's memory). Was quite happy, and surprised! It wasn't said exactly the same way, but only 5-10% of the data was lost.


Robber's Rules
Steve R
I am helping a friend get ready for a million-dollar mediation - and we are wrestling with a complex issue that appears to be mathematical in nature, akin to the Prisoner's Dilemma, and possibly a missing piece of AGI.

The situation is complicated, but in a way like Israel or Ireland where two groups think they own the same thing, so they get together to discuss how this might be unfairly divided between them. My group sees the other as robbers who have acted fraudulently to secure their position, while the other group has papers in place giving them effective title - but with a 20-year wait to get anything. The mediation is how to divide up the money now, with some dangerous but uncertain leverage to ruin the robbers in court if they don't act reasonably.

This seems to all boil down to "robber's rules". Why don't robbers routinely kill their victims and strip them of their valuables? This is addressed in Adventures in Arabia by William Seabrook. There are several reasons - that all seem to sort of apply here:
1. Other robbers will see killers as being without principle, and so won't trust them to fairly divide the booty. Therefore, it is more profitable to first kill the prospective killer - instead of the victim.
2. Blood is SO messy - when simply the threat of death can probably accomplish the same thing.
3. If you don't leave your victim with SOMETHING he might perish, and his death would be blamed on you.
4. If you are too greedy, others will hear about it and mount a posse to come after you.
5. If he has powerful friends, this could result in your own death.

In a real-life incident described in his book, the author was accosted out in the middle of the desert by a band of bandits. He produced a note written in Arabic he had been given to address such situations. The robbers carefully read the note - and sent him on his way without robbing him. How could any words possibly have turned such a situation around? His next goal was to find out precisely what the note said…

I once had a related incident, where in high school I was accosted by a gang of 5 teenage switchblade-carrying delinquents - very much like the last scene in West Side Story. I was able to walk away uninjured. I started by challenging their leader…

I would think that SOMEONE has studied this sort of thing in the past - does anyone here know of such a study?

Mediations seem SO much like ball squeezing contests. So, what is the winning strategy?

With no agreement my group gets nothing, and the other group must wait 20 years to get it all. With an agreement, we cut this baby in two according to agreed upon percentages.

There seem to be two camps:
1.  Demand 100%, or else Russian Roulette in court with maybe a 50:50 chance, and
2.  Divide it in half or ???

There will doubtless be head games, Mutt and Jeff setups, etc., as this thing unfolds.

I posted this here because SO much of what people here expect an AGI to resolve are disputes much like this one.

Thoughts?

Steve Richfield


My reply
The Devil seeks to rob, kill, and destroy.

"The thief comes only in order to steal, kill and destroy; I have come so that they may have life, life in its fullest measure. John 10:10 (CJB)"

https://www.biblestudytools.com/john/10-10-compare.html

You can see in nature (even to this very moment) a lion will enter an animal habitat and eat a whole family alive as a food source (usually they corner a young one, but house robbery can entail whole tribes). I have witnessed this in YouTube videos. They will even take their mates (some 3rd-world people will kill families, rape the women, and use the children as slaves). I've noticed animals seem to always pull the fetus out of the womb sac first and run with it once they find it, bringing it to their tree far away (3 times; it was a distance of a house away) - they eat it because they are starving.

Just think: if you rob your neighbor, you can go from hobo to millionaire, husband, and parent, all in a day. You get a car, cash, identity, computer data, and so on. So long as you look like the husband, you're good to go.

Above is interaction between the poor and middle class. Sometimes you get middle-class and high-class interaction. Sometimes people GIVE, and are seen as angels. This is true, like the poles on a magnet. Parents give to children more than they can return.


Steve R
Stefan,

"What a f...ing winp, who needs 4 armed guards just to walk around. You gotta be the biggest wimp there is".

"Hey someone, throw him a knife"

"Only wimps, women and sissies need weapons. I wanna grind your face into the concrete with my bare hands".

With that, he folded up his knife and put it into his pocket.

I then spit in his face and bitchslapped him as hard as I could.

He lost it and came at me like a cat with fingernails extended.

Then for maybe a minute I was bullfighting, stepping aside as I tagged him each time he made successive passes. I had relaxed as much as possible to conserve my energy.

Then he settled down and decided to really fight, but he had already dumped and blown his adrenaline. After a few punches that confirmed his exhaustion, I unleashed a barrage of 2nd knuckle punches to his face.

He blocked with his fists, but 2nd knuckles go easily between fists, so he opened his hands. I probably broke some small bones in his hands 

He then turned away from me, swinging uselessly around his sides at me.

Now, his gang was laughing and they came running to rescue him.

Hiding my own exhaustion, I looked at the gang and asked "Does anyone else wanna play?" but I got no answer, so I walked away 

I had won mostly because my opponent was pursuing two other goals besides beating me - impressing his gang and satisfying his anger, while my only goal was to stay alive.

I made it look like I just played with him, then finished him off, though that was NOT how I saw it.

I hid my martial arts skills by converting from fist to 2nd knuckle at the last instant before contact - which fortunately no one noticed.

The gang didn't then attack me because it looked like I had beaten their leader 100% fair and square, though I had "cheated" in plain sight.

Yes, my ears were ringing, I could taste a bit of my own blood, I was a bit dizzy, etc., but this had worked out perfectly.

Steve


My reply
If a robber is near or attempts to rob someone's building, this involves force, and the owner uses defense in retaliation. The force can become positive and mutual in Game Theory if both sides cooperate instead. Maybe Prisoner's Dilemma. However, it's unwise to cooperate in that game unless conditions change.
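For reference, here's the standard one-shot Prisoner's Dilemma payoff table (textbook numbers, my choice) showing why cooperating is unwise there unless conditions change, e.g. the game repeats:

```python
# (my_move, their_move) -> (my_score, their_score); C = cooperate, D = defect.
payoff = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
for mine in "CD":
    print(mine, [payoff[(mine, theirs)][0] for theirs in "CD"])
# C [3, 0] vs D [5, 1]: defecting scores better whatever they do, so a
# one-shot game favors defection; repetition is what makes cooperation pay.
```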

Today humans rob ideas. It's all about sharing. Things die and are updated and re-used. Giving up and not hoarding your ideas, technology, shelter, food, wife can speed up evolution. And others can adjust them to improve them and their next generations. Your artificial IDEA rewards are what you make them - you 'think' sharing is bad, or x is bad, but it may actually be anti-harmful! Not death, but saving lives!

Things start random but synchronize later and cooperate instead because they are Aligned. These tribe members still share, ideas still die hard. Random Brute Force still persists. But it's getting smarter. Communication is happening. The ratchet is getting louder.

Today, humans don't attack others to eat them because they aren't hungry and have plenty of food at home. The costs outweigh the gains. Evolution churned lives into new lives and rid us of wimpy people; today we grow food and get our own homes and wives (why hoard a wife? So you breed and I don't?). Ideas are the new hunting season - evolution must have thought "I'll upgrade humans, and then when they get plenty of food etc., they'll still have to work for it anyway wahahaha". Basically, instead of finding food, they find money, or ideas. Humans still fight, but they just make ideas the fighters for them. Ideas initiated large food production, home production, etc. It's like a domino - it's the same thing. I mean, large food production and a mate is all we need; why do we all still work so hard? Our homes have to be better than before? Is this greed? We all need TVs, etc.? Or maybe lower domains attack these fine families, resulting in constant problem solving. We take to the ideas, not food. Some know they age and will die, so they are not happy until they are immortal. So instead of robbing for food they rob for ideas. Ideas are the new revolution.

Back in the jungle we had all we wanted...banana trees, mates, huts....evolution has been evolving information - DNA, ideas.


OpenAI on the Rubik's Cube solved by the arm:
"The biggest challenge we faced was to create environments in simulation diverse enough to capture the physics of the real world. Factors like friction, elasticity and dynamics are incredibly difficult to measure and model for objects as complex as Rubik’s Cubes or robotic hands and we found that domain randomization alone is not enough.

To overcome this, we developed a new method called Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult environments in simulation.[2]
Our work is strongly related to POET, which automatically generates 2D environments. However, our work learns a joint policy over all environments, which transfers to any newly generated environment.

 This frees us from having an accurate model of the real world, and enables the transfer of neural networks learned in simulation to be applied to the real world.

ADR starts with a single, nonrandomized environment, wherein a neural network learns to solve Rubik’s Cube. As the neural network gets better at the task and reaches a performance threshold, the amount of domain randomization is increased automatically. This makes the task harder, since the neural network must now learn to generalize to more randomized environments. The network keeps learning until it again exceeds the performance threshold, when more randomization kicks in, and the process is repeated.

One of the parameters we randomize is the size of the Rubik’s Cube (above). ADR begins with a fixed size of the Rubik’s Cube and gradually increases the randomization range as training progresses. We apply the same technique to all other parameters, such as the mass of the cube, the friction of the robot fingers, and the visual surface materials of the hand. The neural network thus has to learn to solve the Rubik’s Cube under all of those increasingly more difficult conditions.

Domain randomization required us to manually specify randomization ranges, which is difficult since too much randomization makes learning difficult but too little randomization hinders transfer to the real robot. ADR solves this by automatically expanding randomization ranges over time with no human intervention. ADR removes the need for domain knowledge and makes it simpler to apply our methods to new tasks. In contrast to manual domain randomization, ADR also keeps the task always challenging with training never converging.

Analysis
Testing for robustness
Using ADR, we are able to train neural networks in simulation that can solve the Rubik’s Cube on the real robot hand. This is because ADR exposes the network to an endless variety of randomized simulations. It is this exposure to complexity during training that prepares the network to transfer from simulation to the real world since it has to learn to quickly identify and adjust to whatever physical world it is confronted with.

The robot can successfully perform most flips and face rotations under all tested perturbations, though not at peak performance.

Emergent meta-learning
We believe that meta-learning, or learning to learn, is an important prerequisite for building general-purpose systems, since it enables them to quickly adapt to changing conditions in their environments. The hypothesis behind ADR is that a memory-augmented networks combined with a sufficiently randomized environment leads to emergent meta-learning, where the network implements a learning algorithm that allows itself to rapidly adapt its behavior to the environment it is deployed in.[3]

More concretely, we hypothesize that a neural network with finite capacity trained on environments with unbounded complexity forces the network to learn a special-purpose learning algorithm since it cannot memorize solutions for each individual environment and there exists no single robust policy that works under all randomizations.

To test this systematically, we measure the time to success per cube flip (rotating the cube such that a different color faces up) for our neural network under different perturbations, such as resetting the network’s memory, resetting the dynamics, or breaking a joint. We perform these experiments in simulation, which allows us to average performance over 10,000 trials in a controlled setting.

Our method currently solves the Rubik’s Cube 20% of the time when applying a maximally difficult scramble that requires 26 face rotations. For simpler scrambles that require 15 rotations to undo, the success rate is 60%. When the Rubik’s Cube is dropped or a timeout is reached, we consider the attempt failed. However, our network is capable of solving the Rubik’s Cube from any initial condition. So if the cube is dropped, it is possible to put it back into the hand and continue solving."
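A hedged sketch of the ADR loop described in the quote (my own toy rendition, not OpenAI's code): whenever recent performance clears a threshold, widen the randomization range so the task never stops being challenging.

```python
# Toy Automatic Domain Randomization: difficulty range grows with success.
import random

def train_step(difficulty):
    # stand-in for real training: success gets rarer as difficulty grows
    return random.random() > difficulty / 10.0

def adr(threshold=0.8, episodes=200):
    difficulty, successes, trials = 1.0, 0, 0
    for _ in range(episodes):
        successes += train_step(random.uniform(0, difficulty))  # sample an env
        trials += 1
        if trials >= 20:                    # evaluate over a small batch
            if successes / trials >= threshold:
                difficulty += 0.5           # doing well: harder environments
            successes, trials = 0, 0
    return difficulty

print("final randomization range:", adr())
```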

My idea: 
"Just realised the constant wiggle is also for information"
I bet it even uses it as motion to move the cube hehe
@Ankit Arya it sees the cube, knows raising the wrist will PROBABILISTICALLY/LIKELY move the cube in a desired way, and while action y may do as much, action x will get much more info (random sensory input, likely some helpful).


Wait guys......wait guys.....
Remember K showed us the 64 metronomes that sync up? That was amazing. What happened? You had 64 sticks going back and forth at different times ON TOP of a platform that can slightly move, like random Brownian motion in an apple. The random sticks synced up and were no longer in random motion! It was all the same direction!!!!!
&
What would happen if we put a sphere of water in space? The heat in it is really just each atom moving in its own direction, and the tension of the liquid holds them together (boiling water grows in volume and spits off blobs - that's motion!!! Not heat!!!). Would the heat sync up and make the sphere move in one direction and not age anymore? It should, according to the metronome observation.

Further, we can already do cryonics on single cells or small clumps, but not larger mammals. This may be because we need to get you to the frozen point after an ultra-slow cooling process - a year or more to cool down, to allow EQUILIBRIUM of the heat to settle and not be unevenly distributed more in bones etc.
&
Proof: No matter if each atom has x number of photons in it, ex. 167 trillion, or an amount with infinite quantization, ex. ~167 trillion, you should be able to take away heat and have the system simply age slower. You have the same atoms, just less speed. But what happens when an atom must bounce off a vessel wall and sticks now? The system is not just slower, but different, oh no. Well, metal in your body is also getting harder as you get colder; it's possible most atoms will stick at the same time, and not have water freeze first. OK, what about shrinkage of the system's volume as it cools - it's not just slower... Well, you can compare a thermometer here with red dye; it shouldn't rise up the tube (unless that's OK), so how?, hmm, the fluid shrinks as less random motion is in it........ need some tests.....
&
Let's think. If we can make the photons in a system of atoms non-random and equalize the motion to sync up, we may rid ageing. It just ages slower, like time slowed down, by freezing you ultra-slowly over ex. a year (you may need to be anesthetized to not move). But could fewer photons cause the motion of the atoms to change unevenly, or shrink so much that a blood cell can't pass a small vessel anymore? The random motion may actually make the system function differently - no shrinkage and no sticking unevenly.

What about first heating yourself up? By the time you get back to normal randomness, you also have extra wave motion going on. However, we want to make the heat you do have into motion and/or remove heat. You could remove no heat and let your body start moving by syncing, and/or you could remove heat veryyy slowly and hope that equalization is all you need to take no damage.

Wait.....The whole Earth would equalize all heat, shrink, and move in 1 direction.....except the sun adds new heat all the time....so does the Earth's core...
&
Another issue with the metronome syncing concept: how can it be so if 2 particles hit into each other and bounce back in opposite ways.....how do they SYNC?? Look > /\ = \/......not =....the 2 atoms would not end up going the same way in a vacuum box.........hmm, what if we had a million though......one goes up to the right a tad and one hits it to go rightward now.....maybe, but we still need a reason that clearly states WHY..............

Wait....................if what I believed is true.............I should be able to run a computer simulation of metronomes..........
&
Further, I should be able to run a computer simulation of 1,000 atom-balls bouncing their own random way (an expanded balloon from heat) and get the balloon to shrink and move left or right only, eventually, because the bouncing balls should knock each other into sync.

also........why do they go OUT of sync once the beer cans are removed!?:
It was a lie this whole time...............................I bet the metronomes aren't even real..they're just cheating using electronics

Yes, I know that, but the idea here in physics is that nodes sync up....from randomness..........why doesn't hot air in a box start moving the same direction, hence making the box shrink from less heat and start moving ex. left? Or can it? Like 100 bouncing balls in a sim in a box starting to move the same direction and push the box, instead of being destructive heat making the box inflamed and exploding/ripping apart.

So photons in a box of mirrors all align and focus and shoot out as a beam, as I expressed above?

But that's artificially induced.......we're talking natural effects here.........a box of hot air only, no lasers or turbines lol. If the metronomes and brain synchronize waves of straight motion to propagate forth/back/forth/back, random hot air molecules should likewise align on their own in a closed system where the effect is stronger than the surrounding heat (ex. a very large sphere containing 1,000 large balls the size of your thumbnail, instead of atoms.....in space, in vacuum). The result would be heat loss and motion gain, just like how the metronomes began to rock the cardboard platform.
&
If someone can simulate the metronomes in a physics engine, they can attempt this as well.
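Happy to sketch what that simulation could look like. Below is a minimal Kuramoto-model version (a standard toy for coupled oscillators like metronomes on a shared platform; the coupling constant K and the frequency spread are my choices, not measured physics):

```python
# 64 'metronomes' with slightly different natural frequencies, weakly coupled
# through a shared mean phase (the platform). Sync level rises toward 1.0.
import math, random

N, K, dt = 64, 2.0, 0.01
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(N)]

def sync_level(ph):  # order parameter: 1.0 = perfect sync, ~0 = random
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

print("before:", round(sync_level(phases), 3))
for _ in range(20000):
    mean = math.atan2(sum(math.sin(p) for p in phases),
                      sum(math.cos(p) for p in phases))
    phases = [p + dt * (w + K * math.sin(mean - p))
              for p, w in zip(phases, freqs)]
print("after:", round(sync_level(phases), 3))  # approaches 1.0
```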


https://www.youtube.com/watch?v=fakasKReEyg
Theme 0:00
Mysterious Life-Form Nearing 1:13
Captain Olimar Captured 2:16
Mysterious Life-Form Recovering 3:00

Generative Models. Rejuvenation. Negentropy.

Evolution gets faster near the end. Exponentially. At the end it resists change. I'm not talking about the grand splash where heat death results in only outgoing resources and no further incoming meteorites or whatever; I'm talking about the technological final form Earth is shaping into - a globally optimal god made of nanobots, like the Terminator T-3000 in Terminator Genisys (see the end of the movie). This god is the most optimal structure of matter and energy; it can only grow in size now. This is physics equilibrium. It resists change, it has settled. There will (hopefully) remain lots of flexible Brownian Motion (else the utopia would be almost frozen dead of entropy). Why do you think evolution gets faster near the end? Why do you think this god resists change and seeks only to grow in size? Let me give you a hint - if it was severely wounded (45% blown to bits), it would near-instantly regenerate back into form - like a faster, smarter surgeon able to replace an organ with a transplant much quicker. It's heading to the same state again...it's simply repairing. And it does a better job when nearly fully emerged.

Intelligence=Physics. This is the key.

Some humans never want to be immortal, some want it, some want other things, some suicide. This is normal. But globally the world is heading to a state that regenerates itself. The end of evolution on Earth, the final destiny in physics.

Anyone working on AGI is working on Rejuvenation Technology. Self-emergence, self-organizing, self-attention, self-generation, physics, negentropy are the keywords here. Repair. The universe has nonrandom information, and patterns are learnt that allow it to add back missing structures. The universe, DNA, and ideas are all information and are evolving. Iterative regeneration leads to self-learning, goal learning, predictions, repeat. After repeating, attention on information is self-refocused. This includes nearby friend nodes in the nanobot planet - it will update friends through connections.

Check out OpenAI's website and search Google for "talktotransformer" to try it. Look up the Self-Attention illustration of the Transformer architecture. It's general and can regenerate text and images and music BACK. It does translation etc. - it's a multi-tasker also and is an unsupervised learner. It can scale too. In the near future, I predict it will use not just the context of the text sentence, but images also, to predict the next frame and word. This is a pattern - GloVe does this already. Here it would be working on multi-sensory data, and lots of data also.
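A minimal numpy sketch of the Self-Attention step named above (toy numbers; real Transformers add learned query/key/value projections and many layers): each word's vector becomes a weighted blend of the words it attends to.

```python
# Scaled dot-product self-attention over 3 toy word vectors.
import numpy as np

def self_attention(X):                        # X: (n_words, d) embeddings
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)             # how strongly each word relates
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ X                        # blend of all words, per word

X = np.array([[1.0, 0.0],   # 'cat'
              [0.9, 0.1],   # 'dog'
              [0.0, 1.0]])  # 'cage'
print(self_attention(X).round(2))  # 'cat' and 'dog' pull toward each other
```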


You thought exponential evolution was just because we got smarter. You think - ah, cell phones, more better data faster, more humans... But now I've pinpointed it.

As the world gets closer to the equilibrium in evolution - the end state - it will regenerate and emerge exponentially quicker, because it is almost here. It will add back a block it was missing, then once there are 2 blocks it can add TWO, then 4 once there are 4; today we add ex. 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000 every year instead of 1. The whole world will become nanobots and can instantly regenerate if half were blown up. It will then grow and eat planets.

The larger it is, the more it can repair. The faster it can move. If you have a planet where half is only soil and grass, it is not as fast and doesn't have as many workers. So full utilization AND eating nearby planets result in faster, better rejuvenation.

The universe starts as mostly random data, and through informational patterns it is able to predict and re-generate itself back. Synchronization of cells, feature nodes, and agents, at all levels of hierarchy, increases as evolution happens. They become aligned because of patterns in the data. Just like rocks in a bucket, metronomes on a platform, words in GloVe, religious doctrines where you grow up - they equalize with each other and sync up to a similar state, allowing propagation through a hierarchy or heterarchy so it can build higher-up nodes, like a pianist easily can. These vibrations are also brain waves. Random domains in a brain or country are present even in older-age equalized ones. Synchronization is like Attention: they all look at each other and prepare for Translation, to adapt and get better decodings/encodings for this layer.


Alex, the first cell emerged on Earth 3.8 billion years ago; today we have cities full of humans. The change that happens each day is always a lot - air blows around, soil churns, atoms jiggle, energy transfers. So what is the difference between 3.8 billion years ago and tomorrow's change? Negentropy - that is the difference. And more negentropy happens each next day than the last day. Today we are much closer to the final form of Earth than 3.8 billion years ago. The final form of Earth will be the one that is quickest to re-generate back to the final form if 45% of itself were blown to bits by a huge disaster. It's like a monkey compared to an educated doctor - who can replace a broken organ faster?

You know yourself that possibly within 100 years the whole Earth will transform into a nanobot supercreature made of the artificial cell. That's a LOT of negentropy blown away very fast; it's a very powerful system, and it can repair anything very fast - broken wall - no problem. We are reaching this supercreature exponentially soon; lots of good change will occur and the whole Earth will transform massively. It will be able to repair things ultra fast, and that is exactly why evolution is so quick near the end - the amount of repair/rejuvenation doubles each day. The thing (Earth) adding back itself is able to add back MORE every time it adds back more of itself. A raw example: more humans = more regeneration of the supercreature.

Humans made machines and eventually bigger computers and software. This information, not our DNA, is now evolving. This information is re-generating itself through us manipulating it, using unsupervised algorithms and our own brains too, still. Just like more humans meant more advancements, more and better data means the faster the supercreature emerges. The core idea here is that Earth is reaching the globally optimal form - an advanced godCreature. It may slow down near the true end, but we will be quite happy when the curve shoots up fast soon. It re-generates itself on its OWN, and then repeats that step lol. After each step, it gets faster at emerging. The best example is more humans on Earth, and making each work faster (ex. through electrical computation over chemical-based signals).
So, evolution gets quicker near the end (think of Earth becoming the creatureGod) because of self-regeneration, self-emergence, self-attention architecture, and self-organization. Like ANN training, there are still randomized domains - like bugs, to this very day - but the domains are aligning and synchronizing like brain waves propagating across structured layers of hierarchy; stronger connections are being made and more cooperation occurs than lions fighting solo.
Evolution is quicker near the end because the self-organization/self-regeneration on Earth allows for double the self-regeneration - it replaces nodes with faster ones, not just duplicates of them.
It does that on text information too - first it finds better translation encodings/decodings using Self-Attention, then does entailment, adding on-topic similar words:
cat was in the [cage]
The more context you have, the better the disambiguation of other words... i.e. the more words added to the story, the better the translation of each word.
1) All works by physics.
2) Evolution on Earth (until it gets you the nanobotGodPlanet) is exponentially faster.
3) Evolution on Earth is self-organizing, as said, physics.
4) The reason it is exponentially faster is because the closer it gets to the globally optimal form, the better (hence faster) it can repair and settle into that equilibrium state!
5) If damaged, it can repair and go BACK to the optimal form again quickly - 45% of the superorganism can be blown to bits yet it can pick them up and reach the final state very quickly (it won't take 45% of the time evolution took).
NEGENtropy=the amount of difference between where we are currently and the final state
"Negentropy is reverse entropy. It means things becoming more in order. By 'order' is meant organisation, structure and function: the opposite of randomness or chaos. One example of negentropy is a star system such as the Solar System. Another example is life.
As a general rule, everything in the universe tends towards entropy. Star systems eventually become dead. All energy has gone, and everything in the system is at the temperature of the surrounding space. The opposite of entropy is negentropy. It is a temporary condition in which certain things are hotter and more highly organised than the surrounding space. This is the second law of thermodynamics:"
It's not out of the barrel yet though; no one can say for sure the universe will lose negentropy. It may allow immortality. Planets grow in mass and hog matter and energy, don't they - but they give it BACK when they get too big and ignite into stars and give off heat bursts :)


"We have to enter knowledge manually in the beginning, there is no way around that. You do the same thing with 1-year-olds too after all (talk to them in easy sentences). At some later point, we can start feeding it information from existing sources."
Good point, and mentorship is the faster way - I could teach a baby all I know by age 10. However, we can't take 10 years being "Father"; we gotta make these things MORE advanced than humans. They should be called ASIs :). They need to eat lots and lots of data on the internet. It can talk to itself fast of course, but that provides only limited bandwidth for research. It can however seek data after long bouts of thinking.


Me & Alex Facebook:
Alex: Why do you like using words that you don't understand
Alex: Do you think it makes you sound smarter?
Alex: Can you define negentropy first before using it 20 times
11:18 AM
NEGENtropy=the amount of difference between where we are currently and the final state
ah see:
"Negentropy is reverse entropy. It means things becoming more in order. By 'order' is meant organisation, structure and function: the opposite of randomness or chaos. One example of negentropy is a star system such as the Solar System. Another example is life.
As a general rule, everything in the universe tends towards entropy. Star systems eventually become dead. All energy has gone, and everything in the system is at the temperature of the surrounding space. The opposite of entropy is negentropy. It is a temporary condition in which certain things are hotter and more highly organised than the surrounding space. This is the second law of thermodynamics:"
Simple Wikipedia to the rescue
i knew about both
Alex: well it's also wrong
Alex: and pseudoscientific
Alex: no such thing as negentropy
hehe
you agree evolution on Earth is exponential?
Alex: no
Alex: I already explained
Alex: it's not
any year now we'll see the nanobots eat Earth and make the optimal technological form though.......
we already have the human history record showing that things are moving faster these days
Alex: are you dumb or what
Alex: ??
Alex: https://cdn.britannica.com/s:s:300x200/30/199830-004-6B2D1A0D.jpg
Alex: you aren't paying attention are you
Alex: more than 90% of all living cells are either plants or bacteria
Alex: are they evolving exponentially faster?
good one
no
Alex: then how can I agree with "evolution on Earth is exponential?"
Alex: when for 99.9% of the species it isn't
Alex: maybe it is but it's really slow
Alex: something you wouldn't call exponential in practice
Alex: only in theory
Alex: only for 0.1% or less
it may be that by the end, only 1 single nifty scientist (not his team of 5) creates the nanobot swarm that terraforms all the Earth....
Alex: it happened to be fast
90% of humans don't work on science either i think
Alex: yeah you're very edgy
?
Alex: for example you
just because so many cells emerged and are all just sitting around isn't much of a point really......our progress is exponential, definitely within 1,000 years all of Earth will be nanobot ASI if you get me....the optimal form of this matter that is best at keeping its persistence against attacks (by repairing fast)
we already know technology/evolution is exponential.....this is obvious....
the great nanolord is close
Alex: look
Alex: negentropy isn't a thing
Alex: it's pseudoscience
things self-organize though
and
they have an optimal global form
an equilibrium
Alex: second law of thermodynamics cannot be violated
entropy?
Alex: yes
well, we may all die indeed, but the Earth still is able to become that cool optimal god-tech form soon
Alex: now it all is starting to make sense
Alex: god, negentropy
btw planets hog matter and energy and become stars and give it back as heat bursts. It's possible the lost energy/matter will come back in some way
Alex: you're speaking theology
Alex: ok I need to go now
ok


My friend was arguing I'm wrong, saying 'negentropy' is a pseudoscience term and I'm talking Theology (God + pseudoscience). And he said 99% of cells are plants/fish/bacteria and are not evolving exponentially faster - evolution is NOT exponential.
All wrong. What he failed to realize is that the matter and energy of Earth self-organize on their own until they reach a globally optimal form (a technological god) and reach the final equilibrium state. Cells replicate and terraform Earth, but aren't great food finders yet. Nanobots will turn atoms into other types of atoms and spread like cancer. That 1% of cells he mentioned that are evolving exponentially faster are humans, and only like 5% of humans are working on science and making advances. That 1% is the number that shoots up to 100% very fast, when the nanobot replicators emerge. The optimal form of Earth is a god, and it can repair itself (self-organize) very fast and reach the globally optimal form again if damaged. We already know that our scientific progress is advancing at an exponential rate. The repairing gets faster near the end. The final optimal state will be the technological god sitting at the exponential end, from mass cells replicating fast, able to resist damage/death (change from equilibrium) extremely well (except for possibly entropy).


Huh? That makes no sense really......

Humans are nodes....and most of Earth will become nanocell nodes

"Teleology" is that something has a purpose. Well, we can only say why the Earth is approaching an utopia (to have great pleasures for the observers), there really is no reason/purpose at all, the Earth's organization is JUST happening, I'm just a machine. We can say we think why but we can't test it. "Teleology" doesn't exist in the soul-sense, only our every day tool-'purpose' sense. We can not test it for the soul-purpose sense...we can only test real machinery. In the testable sense, Teleology is just evolution...everything has purpose, even the rocky mountains sitting dormant for millennias. It's just physics.
immortal.discoveries (Oct 18):
It's tempting to think centipedes, humans, etc. are 'people', but we are many things; we even comprise the Earth System. I am just a molecule/part in the clock, begging to live. I'm getting a little scared though, looking at history - we may need 500 years before we get this rejuvenator god. We better think faster and team up tighter soon.
immortal.discoveries (1:43 PM):
With Earth, it'll evolve into an approximately optimal final form by self-organizing physics. The final state is the same no matter the starting conditions? If this is robust general Self-Regeneration, it should be so. Let's take a computer simulation: a ball starts displaced and moves towards the bottom of a silver cone ditch by gravity force. No matter if we place some pinball bumper knobs on the slope, it will reach the bottom hole and sit there at equilibrium!

Although it's possible there's true randomness in particles and not just apparently random behavior, it may be that our movie/destiny is set from the start and is only playing out. Either way, we now know it is indeed a general self-organizing physics.
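Here's a hedged sketch of that ball-in-the-cone claim (a toy landscape of my own making): from any starting point, and past the small bumps, plain downhill motion settles into the same equilibrium at the bottom.

```python
# A cone 'ditch' with little pinball bumps; descent still reaches the bottom.
import math

def height(x):
    return abs(x) + 0.1 * math.sin(8 * x)   # cone plus small bumps

def settle(x, step=0.01, iters=5000):
    for _ in range(iters):
        # move whichever way is downhill (finite-difference check)
        if height(x - step) < height(x):
            x -= step
        elif height(x + step) < height(x):
            x += step
    return round(x, 2)

print(settle(3.0), settle(-2.5))  # both end near 0: same final state
```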
immortal.discoveries (1:46 PM):
Another example - trillions of randomly placed water molecules sit in space, attracting to each other until we have a spherical blob of liquid, a planet of water.


We replicate to spread.....food, sex........you can't stop short-cut finders.........they will take all your food, eat you too, take your wives, use children as slaves, and advance...................We can indeed make organisms way more advanced than humans; we are actually nearing this!................ideas replicate too, we already share what we learnt, tools too..............we do the way of least action already...................physics is self-organizing............nanobots are coming


--Alan says text processing won't make the AGI--
If you do what GPT-2 does, it can give you plans, but the human must act them out. The text is just data; if you add images, it's just more data involved in Self-Attention. That'd result in a visual|text GPT-3 that simply has more context.

Did you see MuseNet? It predicts the next note for each instrument, using all notes from each instrument.
immortal.discoveries (5:02 PM):
If you hear a creak, 'door' is more likely in the next frame lol.
immortal.discoveries (5:03 PM):
(visually a door)
immortal.discoveries (5:05 PM):
'Thoughts' in your head, about cause>effect, are just images you were taught - lotssssssss of data! It lets you dream, make plans, and test them out. Internet sentences are copies of videos in a summarized way, and just need linking to motor and you're good to go!
immortal.discoveries (5:08 PM):
>>We need to expand the team at OpenAI<<
immortal.discoveries (6:58 PM):
Actually, our visual cortex processes images, and images contain text. These objects like 'CAT' link to the real image of a cat. Text words are just the same thing, but quarantined. So text on a computer is also just visual objects, even though it doesn't seem like it.

Besides, text vs images - it's all 1s and 0s, data, with synonyms and entailment. Same patterns > the image of "cat eating milk" VS the text "Cat eating milk."
immortal.discoveries (7:01 PM):
Further, if you work with just images and NO text, you still have a useless AGI. Someone needs to implement orders (sensory links to motor).
immortal.discoveries (7:04 PM):
You give an example:
"YOUR SOFTWARE MUST BE SMART >> SO THAT << IT CAN USE LANGUAGE."
Here, our VISUAL CORTEX is recognizing the likely Next Word, like GPT-2 does, no matter any artwork in between like lines, bolding, colors, cut-up paper, ink spills, etc.
immortal.discoveries (7:13 PM):
Actually, same for text too. Also, sorry - I know it's 'one' sentence and not 2 translations surrounding an island, because I get too many flows of entailment at each of the 3 segmentations, and/or most of it matches something I already know.

So when I read it, it recognizes it and stores it as entailment, not explicit translation learning on the fly. Part of that recognition is due to most keys being activated - I think that's it, actually. Hmm.
immortal.discoveries (7:48 PM):
On Saturday, October 19, 2019, at 7:36 PM, Steve Richfield wrote:
The problem with dropping AGI into this is that this would come at the beginning of their history, whereas humans have been evolving as writing has been evolving.

That's not a problem... a baby grows up and learns a language from current data, which includes both new and old facts, even old language data if it didn't die out.


Re: Robot hand with a Rubik's Cube
« Reply #1 on: October 19, 2019, 06:13:12 PM »
I wonder if they will try it now (with the same algorithm) on hamburgers... construction of a factory-made device (Rolex watches)... separating items... etc. I mean, it'd train for a new task, I'd think; it'd have a goal told to it (flip/move x to y location), try to see it with its eyes, then be general enough to do it in many circumstances.

I wonder if they will combine GPT-2 + this + the red/blue hide&seek thing OpenAI made, shown on their website. Very tempting ideas! This really looks like the way forward (lots of data/randomization/model, self-attention, brute force with hints on certain levels (skip atom/run simulation training), goal transfer).


AI safety depends on:
1) Whether an evil actor makes it (developed in US military labs... great... its goal is shooting foreheads; wait until it ends up in your country by mistake!). Ban the military.
2) Whether a good actor makes it but fails to initiate it properly. The best way to ensure safety is not to argue with it, but to initiate it correctly. After that, like any child, you still need to raise your kid until he can take care of you.

To initiate it properly, you give it good goals. It can achieve them in various ways: maybe I can get mom that gift if I rob a bank... but it should reason that it could die and others could die from guns or shock. Any solution it comes up with should clearly have a cause-effect motion, and it should decide whether the effect is dangerous. It may think: blow up bridge, then cars fall, people are in cars, they die. It has to question, through Tasks like entailment/translation/summarization, whether the main goal or related goals are threatened (lives, money, food, video game stash, fetish doll, all items safe? Check.). It should know diamonds, tall buildings, etc. are higher value. This is the only way to teach it what to do and what not to do; there are too many sentences that could play out!

What I suggested was a transfer of goals in a GPT-2, just like its synonym understanding of meanings for words. You teach it that food/sex/immortalityOfTheSpecies are the strongest goals; then it spreads to tall buildings, diamonds, cars, animals, etc. Then, when it thinks of plans, it questions the effect and whether it harms any values. It's just one big GPT-2 that equalizes connections and does plan generating.
I am impressed with the arm and Hide&Seek OpenAI made, though. The arm just adapts as it listens to a plan... but the other achievement is strange: they learn tool use without planning anything. I really think that won't scale to building real rockets. Thinking of plans scales. Humans think, THEN test. Repeat.


Kimera AI made an AGI? Not really, I think...
YouTube vids look cheesy; no more vids after a year, either.
My thoughts: OK, but... GPT-2 could do that: answer a goal "I will make cash" using cause/effect entailment/translation embeddings, "by robbing a bank".
And... yeah... sequences in time are important... reaching goals...
The first vid was interesting: quantum entanglement of data.
My thoughts: I told you this already... it's EVERYWHERE... this is Glove translation embedding Learning from Training. You can do a lot of translation, then entail backwards, translate a phrase, and more...
Hmm, that one vid was great indeed. He says if he knows about going to a party in 2 months, he entangles with the party knowledge and therefore also with the attendees. Then, a guy he owes 10,000 to is going; they are both interlinked/entangled with the party knowledge, both find out, so he avoids the party! Here, the cause's effect voided the cause from taking effect. Notice it isn't just entangled knowledge nodes like in Glove/w2v, but also friendship relational connections and society hierarchies: mayors, states, domains, Canada, Earth.
Like Glove, concepts have superposition and entanglement with others, so if I learn about a party I'll learn about who's going, and I'll back out of the branch, back a question, and take another sentence path on my GPT-2 Next Word path. So, like Glove, my path items can be used in Glove and make me back out of the branch backwards.


The AGI program will actually be a language, made from a language such as Python. All mechanisms in the universe follow the laws of physics, such as moving x to a location, transforming x, segmenting x, or entailing x. It's a space-time thing. Did you notice English words are of 3 types? Type 1: run; type 2: man; type 3: above. Lastly, everything the brain does, like entailment and translation Tasks, is language based, because they use patterns in data, and a language is made of patterns :).
immortal.discoveries
12:32 PM (4 minutes ago)
Imagine a stream of words where you'll NEVER see any word ever again in the stream...

The dog goose him zomp tomp hot dfgdfg tt fhfh gfhfgj trtur dddddd rrr rr r rrrrrr rrrrrrrrrr
immortal.discoveries
12:36 PM (less than a minute ago)
As for image frames, you won't often see the same frame again, actually, even if you go back into your kitchen and stand exactly where you were. But it is extremely close, isn't it :). Same for a nook: it looks most like a table.
immortal.discoveries
12:40 PM (less than a minute ago)
Yep, as I said, words are not only visual objects, but also actions. Humans set up the patterns in this language for us already. MOVE ARM UP TOWARDS THE CAT, BE QUIET!

immortal.discoveries
12:43 PM (less than a minute ago)
Any data the brain uses will have patterns, a language is derived from these patterns.
immortal.discoveries
12:46 PM (less than a minute ago)
This self-aware visioning is the prediction (translation, entailment, segmentation)....you see yourself as someone else and THEN see what would entail. You use past experience to build the future timesteps.
immortal.discoveries
12:53 PM (1 minute ago)
Kimera AI made the 1st ever AGI, you know; they explain Quantum Entanglement at the end of the video (I'll give them credit there):
https://www.youtube.com/watch?time_continue=2&v=INha-RH4o_M

Reminds me of Glove/w2v.....anyone?
immortal.discoveries
1:09 PM (less than a minute ago)
On Sunday, October 20, 2019, at 12:43 PM, immortal.discoveries wrote:
Any data the brain uses will have patterns, a language is derived from these patterns.
And, the brain is made of patterns. Everywhere. The same cortex is everywhere.
immortal.discoveries
1:16 PM (52 minutes ago)
An iterated function is also a pattern.
immortal.discoveries
1:57 PM (10 minutes ago)
The hardware, software, and the knowledge data the brain uses all have patterns. They use an alphabet to compute/talk/move.

Patterns in data allow you to discover similar components, and then do entailment. Language is made of a few elementary letters, and has patterns that allow you to re-use higher level parts like 'tha' and 'thank you'.

The whole brain and even its data is about re-use...efficiency...
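One way to picture that re-use of parts (my own toy illustration, not from the post): a trie stores 'tha' once, and every phrase starting with it shares the same branch.

def insert(trie, phrase):
    # Walk/extend the tree one letter at a time, re-using existing branches.
    node = trie
    for ch in phrase:
        node = node.setdefault(ch, {})
    node["$"] = True  # mark the end of a stored phrase

trie = {}
for phrase in ["tha", "thank you", "that"]:
    insert(trie, phrase)
print(trie["t"]["h"]["a"].keys())  # dict_keys(['$', 'n', 't']): one shared 'tha'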
immortal.discoveries
1:57 PM (2 hours ago)
I can't stress enough that if your data (hardware, software, knowledge) has no patterns: jog them go up you co lo gg tdfdh gg ggg g gggggg g

then it's useless. As soon as you DO have patterns, however, it becomes a Language: efficient, able to re-use parts, and able to discover using transformed contexts.

It's true a computer speaker has few patterns, actually... wire, cone, body shell, protector plate, logo, etc., but the particles that build it have a common language. In fact, zoom in enough and you'll see the same structure everywhere. But yeah, on the high level it has few patterns, because there are so few high-level parts to it!
immortal.discoveries
4:20 PM (3 minutes ago)
The primitive old way of learning was raw trial and error motor reward learning - learning to walk.

The neocortex now found the better New way to learn - to daydream plans. The solution might be to first retreat back to home, then wait, then go back out to grab the food.

Simulating plans uses Hindsight Experience Replay too, and is faster than acting it out. 


When you slide your fingers along a cup, smell it, see it, hear it... all that context helps to vote on the classification in each sensory cortex (what you see, what you feel, what you hear). If you dream the rest of the sentence, your brain will do internal votes on the likeliest translation plus the likeliest next 'words' for the movie. You can then try acting it out, looking for cues, and adapt your motor muscles as needed to do the desired plans. Repeat.
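A toy Python sketch of that voting (the senses and probabilities here are made up): each sensory cortex votes with a probability per candidate object, and multiplying the votes gives the consensus classification.

senses = {
    "vision": {"cup": 0.6, "bowl": 0.4},
    "touch":  {"cup": 0.7, "bowl": 0.3},
    "sound":  {"cup": 0.5, "bowl": 0.5},
}

def vote(senses):
    candidates = {"cup": 1.0, "bowl": 1.0}
    for dist in senses.values():
        for obj in candidates:
            candidates[obj] *= dist.get(obj, 1e-9)  # each sense weighs in
    return max(candidates, key=candidates.get)

print(vote(senses))  # cup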


The prediction is based on what the parents FED it :)

Christians say God exists because if not it'd be horrible, we're built in his image and look perfect, and we must do nothing. The prediction is based on what they want, plus biblical observations lol. You need data outside the Bible to base it on.

Anyone who says 2050 is the singularity may be trying to fit themselves IN to live.


Ah, see, they thought it too, at Numenta: the Thousand Brains Theory. Each sense, each node, all vote on what each other is, or will translate into, or entail, or how to act and adapt as motors act. So for each layer, as it propagates up each hierarchy, they encode/decode by voting at each hierarchy node along the way! So you see a coffee mug, and all the sensory input votes on whether you see a circle for the handle... etc. This is Self-Attention, but at each layer of each hierarchy, e.g. the visual hierarchy.

What about, based on all this sensory context, you activate motor behavior? And it updates as you move. Of course your actions are not usually based on external sense stimuli, but on internal Goals; often you think about them and try out a plan.


Yes, the data it learns is a language, because it has patterns and context, and it can therefore do Tasks and learn about this language.

The universe/world is self-regenerating from various starting conditions; it's general physics. The general intelligence is one that replicates and resists death, because it repairs/spreads using a lot of context. It's general physics.


"Our work is highly multidisciplinary, utilizing insights from computer science, traditional artificial intelligence, computational linguistics, ontology, neural networks, cognitive science, philosophy, psychology, mathematics, software engineering, and others."
indeed
In short: computer software geared for a vector GPU device, based on information/mathematics and physics.


See, I was right: my mom's dad was dying and his wife was crying, but not him. He met my mom at her job and didn't tell her; he was happy, because he accepted his death and has nothing to worry about and nothing he can do. Although I certainly could cry, I would resist death by trying to become immortal and contacting Alcor, though he didn't know of Alcor, the place that freezes your body.


"Between quality, cost, and time, you can only choose two.
You can have high quality & cheap,    but that requires lots of time.
You can have cheap, and fast,    but the quality will suffer.
You can have high quality and fast,    but its gonna cost a lot."
The highest-quality thing in the highest quantity will come in long due time, at a high cost :) ... and the result is the lowest cost lol.

It was cool to see, on a YouTube cryonics video, one person say that most people are still natural and will cry at a funeral and go to a hospital even when they are religious, when it should only be a temporary discomfort. The other person then said religion is nonsense and has no part in the world, much like what I said about it not having any use for the world, and half of your time not being applicable when you apply it to the real world.

 work on one thing at a time by scheduling things
 healthy food, don't spend time eating/doing one thing only
 full body exercise
 just like the brain does in the background, venture off on the news and look for little tidbits that have nothing to do with anything you're actually working on, which is good for finding diverse analogies; although you should usually be working on your work more and looking up things that are very related.

To focus deeper, you need to focus longer: you need more context, a larger context window, and you need to up your temperature in precision.

Skaters get better as they practice more, but a professional just starting back after a 5-day break doesn't have it in their temporary mind as strongly. This is truer the less professional you are and the more you have on your mental plate.

The reason that happens is probably just that too many exercise keywords are in your short-term main-attention memory, where the 'you' is thinking, and also that the body is tired and your main memory attention is allocating pain senses.

"If your really tired and you dont know why,  just have to keep pushing I think.
Depends,  do you really want the a.i.?   No pain no gain."
immortal.discoveries
1:31 PM (less than a minute ago)
!!!!!!!! YEAH

SELF-ORGANIZE!
https://www.youtube.com/watch?v=zFoQtkFUdd8&feature=youtu.be

ninja gaiden black grass boss hell
46:46 may be what we see at the final moment
https://www.youtube.com/watch?v=o2TY7GUyjOc

TENET game
If you don't keep your focus enough like the beginning of this video, the end at 26:26 will occur too fast!!!:
https://www.youtube.com/watch?v=Hg9wAtpICMU&fbclid=IwAR1eVzPzyC4_0V-bZFr7P7PirLn71wFb07E2A0tXAhVo92GmZ_qAT7gsVy8


It's just faster memory... the CPU needs more cache memory!!!!! It's so expensive though... I have 32GB of RAM and could use a lot more, really; I just make it half the time. If I did big Blender physics scenes I'd need 200GB of RAM.


keg is on my ideas too, haha yeah i'm initiating stuff, ranch too!
" One the root concept in swarm logic. 
----------------------------------
 The human cell are suppose to work together. But the is bickering. Cell argue with hormones,  inflammation factors, go on
strike, or in the worse case screw the program and become cancerous.

 With in a company a department have many people who are suppose to work together. But they bicker.
 They argue 
that the others are getting payed more money, or they are dumping all the work on them, or
they are taking all the work and you are standing around doing nothing when the boss come around, and so on.

   Department bosses get jealous of other department bosses. This goes all the way up the chain of top command: the head department.

 Ya, so when blood sugar gets low, the cells will complain and do their protesting. The other departments, like the lungs, heart and so on, will complain and do their protesting, and so on. All the way up to the top in charge: the brain, in the head department.

 So if you do a super focus, your brain is going to take up a lot of blood sugar, and the other lower parts and organs, and the cells they are made of, could cause problems.

 Just at rest, the brain uses 60 percent of blood sugar. As said in this video, around 6:38 minutes into it.

6:38 minutes  
A Mom Tried Keto Diet For 30 Days. This Is What Happened When Things Went Wrong:
https://www.youtube.com/watch?v=W74jOR9V6UA
----------------------------------
  So when you do get into a super good focus, reward your body for being a good backup and say 'please do not bring me down!'

 When you feel bad and cannot focus on anything, then exercise and fast, and punish all parts for not working together.

 A well-trained body is good, but if you sit around and gain weight you de-train your cells and parts not to work together. You are out of shape.
 In caveman days, an out-of-shape body lets the rabbit get away, and the body starves.
 In an out-of-shape body, all parts and cells hoard blood sugar and fat, causing the condition of being diabetic.
----------------------------------
 So, when you feel a super holy experience, or hear a super uplifting song, have a reward of beer or go into a super relaxed state until it passes.

Good playlist. Now this is my style > https://www.youtube.com/watch?v=qFNNP1p4gBA&list=PLGKYrcq6d-1IUY4lTRKvjxcXC24dPWrSD&index=11
immortal.discoveries
9:15 PM (2 hours ago)
Yes! Like cell and brain and country. The hierarchy of smaller parts is supposed to work together and not hoard the energy, like when you sit thinking on a chair but never exercise; but domain randomization from un-equalized energy waves may stir competition, and the boss node high up may punish them all! They all may get fired and replaced!
immortal.discoveries
9:17 PM (2 hours ago)
When you talk and generate ideas, you may sound like an idiot, just like motor training for walking. But eventually you start walking. Running. Ideas that had no use pull together! Larger Self-Attention window! (Transformers)
immortal.discoveries
10:19 PM (34 minutes ago)
> motor and knowledge "babbling" --- think Hindsight Experience Replay (HER) or Self-Attention using bag of context
immortal.discoveries
11:07 PM (less than a minute ago)
You want (have) some uneven energy in the system; competition is good. But eventually they all align and get even faster at aligning: cancer. When a group of nodes recruits and improves.


K is right! High-up peeps want to stay at the top level; ideas are easy to generate; cooperation is really self-centric... because it means more profit for their currently working career... We need to realize we aren't happy with our life/career though!! Bad!! We need a new solution! We need AGI/immortality. We MUST listen to others' ideas; ideas are worth lots; cooperation can be very active...

Currently top players are a bit ignorant of non-PhDs etc., like if you have no paper; but they DO read others' ideas, actually. That is worth a lot. It affects them. As long as you spread your paper, it will get in front of them. So we do make ideas, we do cooperate, and the top players do take us up.

But feedback is slow. Unless you need no feedback and have the AGI Paper, you'll be looking for help/cooperation, which only happens if your paper is, again, interesting. You must explain how it solves all their problems.


"algorithm mutation
algorithm fusion"
no, idea mutation and fusion!!
idea=algorithm


A THEORY ABOUT FOOD:
Air is all around us, and we eat it very often during the day. One minute without air and you're dead.
There's also heat all around us, and we eat it very often during the day. A few minutes without heat and you're dead.
There seems to be less water around us, and we don't need to eat it as often. A few days without water and you're dead.
There's not as much food around us, and we don't eat it as often during the day. You can survive without it for weeks.
There are rare nutrients that you don't often eat at all. You can survive without these for years. Often you eat your essentials but you're missing important nutrients.


As said a year ago, type the right few 1000 keyboard smashes and you make god come out of your computer! Though we will need hints, as it is currently infeasible.


Wait a minute: they said it was 200 seconds to run it using that quantum computer, but it would take 10,000 years on their best supercomputer.

"While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task," the researchers said.

"The researchers estimate that performing the same experiment on a Google Cloud server would take 50 trillion hours—too long to be feasible. On the quantum processor, it took only 30 seconds, they said."

This means that, although they say it may be false hype, they DID get the test done... it is true...

Why would it be a lab case? Was the algorithm they ran the quantum circuit itself? And why is that cheating?

"Dario Gil, head of IBM Research, advises against using quantum supremacy as a metric with which to measure progress in the field. "The experiment and the 'supremacy' term will be misunderstood by nearly all," he told Fortune."

"Gil described the experiment as a highly special case "laboratory experiment" that has "no practical applications." He added, "Quantum computers will never reign 'supreme' over classical computers, but "


Hmm, I like this Topic.
Notes:
missing data.
universe/world isn't lossless (present future has missing data).
Lossless Compression is not the best rule.
self-regeneration works as best compressor (Physics is).
a brain with gold facts makes more gold outputs, which are imperfections, optimal ones!
"These future observations may be thought of as missing data for which optimal imputations are the output. ".
language goes from real cat to word cat.
Hierarchies like in the brain make the future missing data by quantum entangling an analogy (info creation) using Translation and Sequencing.
"Mean imputation is a method in which the missing value on a certain variable is replaced by the mean of the available cases." --- SELF-ATTENTION (Transformer net architecture).
Indeed you need a metric; you will need context and a meaning understanding, like in Glove/GPT-2, for doing translation replacing of words/phrase parts.

(I did find that physics/AI/evolution/intelligence are VERY much the same thing)


Let's say you have an array of n elements, and you want to pick a random element from the array.  This is easy - choose each element with probability 1/n.

But say you receive the elements one at a time, and you don't know how many elements there are going to be.  How would you do it then?  You don't know what n is, so you can't pick each element with probability 1/n.  In theory you could wait for all the elements to come and then do what you did before.  But the stream could be very long, making this infeasible.  We want a solution where you don't store elements and you don't look at them twice.

As it happens, you can do this as follows:
Choose the first element with probability 1
Replace the first element with the second element, with probability 1/2
Replace the current element with the third element, with probability 1/3
Continue doing this until the stream ends.

This algorithm takes constant space and linear time.
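A minimal Python sketch of the algorithm just described (the stream can be any iterable of unknown length):

import random

def reservoir_sample(stream):
    # Keep one element; replace it with element i with probability 1/i.
    chosen = None
    for i, element in enumerate(stream, start=1):
        if random.randrange(i) == 0:
            chosen = element
    return chosen

print(reservoir_sample(iter(range(1000))))  # uniform over 0..999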


./sleepsort.bash 5 3 6 3 6 3 1 4 7

The basic idea of sleep sort is to print the number i after i seconds (or time units).

For example, if the numbers to be sorted are 5, 3, 9:
Print 5 after 5 seconds;
Print 3 after 3 seconds; and
Print 9 after 9 seconds.

and woah, you realize that you get a sorted output!

Though the practicality of sleep sort is a big question, it certainly is a very beautiful algorithm.
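The bash script itself isn't shown above; here's a rough Python equivalent of the same idea (one timer thread per number, which is the whole trick):

import threading
import time

def sleep_sort(numbers):
    # Print each number n after roughly n seconds; output comes out sorted.
    def worker(n):
        time.sleep(n)
        print(n)
    threads = [threading.Thread(target=worker, args=(n,)) for n in numbers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

sleep_sort([5, 3, 6, 3, 6, 3, 1, 4, 7])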


hierarchy, heterarchy, compressor, probabilistic
quantize
redundancy
10. Random Number Generation
incremental hashing??
yes, binary trees are really incredible algorithms

--Andey wrote a Paper but didn't follow the template they want it in--:
Say you wrote it and tried to follow the template but couldn't find it.
Say it'd cost so much time and is not efficient to redo it.
Make a case for why it's still readable or has good ideas. . . .


When you look at something close on your lap, or a star, it is in the past by the time you register the sensory input. Stars are seen up to millions of light-years in the past. Electromagnetic waves let astronomers see the universe from the past. Hydrogen is the most common atom, and it releases a 21 cm wave that is just in the middle of the spectrum; as the image of waves is captured, farther-away masses of hydrogen give off more spread-out, stretched waves, because the universe is expanding and yanking at them.


Punishing others is only good if you must meet again (or meet other civilians, even if you never see him/her again!), or to feel good if you think a lot that you must get retaliation, or for role play. Otherwise it is a waste of time! Justice is BS in that case! Just stop the pains/deaths.


As nodes align the beam strengthens.


"Whether we are a detective trying to catch a thief, a scientist trying to discover a new physical law, or a businessman attempting to understand a recent change in demand, we are all in the process of collecting information and trying to infer the underlying causes. -Shane Legg"


This ability to generate millions of true facts on demand is similar. Here, they are all true enough and have high probability. Dogs have 2 eyes. Cows have 2 eyes. Snakes have 2 eyes... But then I thought: wait, this is just a Glove thing, no? I mean, a new discovery is just a sequence of Word Prediction; the model to do it is already there. But I guess, since I myself can brainstorm and use my insights to think deeper, it is true that if you generate facts and then store them, the generator can go on to generate newly predictable facts. If it runs too long without looking at real-world data, then it will become too abstract. Cars have air bags. Cars are safe. Cars are child-proof. Cars can be sucked. Cars=air bags, Cars(air bags)=safe, Cars(air bags(safe))=child-proof.


"What does it mean for a system to understand something?"
All data in the universe has patterns; if it didn't, then a sentence would never have the same word appear twice ("the cat catz cttz sd sdd ab abab abaaaaaa amm mmmm mm m mmm"). Understanding something is recognizing something, e.g. cat/dog matches cat. A model like Glove has an understanding of patterns; it can predict using its data model, so it can recognize something and transform it (just deeper recognition, i.e. translate/entail it).


For both motor choice and knowledge thinking, Hindsight Experience Replay (HER) allows you to store actions or facts for later use. You may guess the wrong 6 of 10 doors, each with a unique object behind it, but after 10 checks you have 10 answers/facts (or all 10 basketball moves: hoop score, hoop boomerang, hoop miss, teammate knockout lol, etc.). These bags of tidbits are not just useful for later needs, but also for mass context voting in a Self-Attention bag of context for downstream tasks, like having 100 words in a sentence disambiguate a word to translate.


"Natural language is shared symbols with commonly agreed upon approximations transmitted inter-agently for re-rendering concepts."
Most sane part.

"we are creating the universe more than discovering it"
"Are we actually creating the universe? What caused the uncaused cause?"
The universe has a beginning, because you can't CREATE something infinite. For something infinite in time and law to be existence itself? Too much data. Nah. The universe has a beginning because it can roll itself out on its own! No need for an infinite bidirectional tape. But what about the multiverse? Our computers can make a multiverse using this one verse, meaning there is only 1 logic behind the scenes. It seems like it evolves onward from the big bang. That evolution includes 'discovering'. Evolution is creation in its own sense.

So it seems the simplest solution here, with less data, is to have a beginning, and to have 1 logic computer run the universe. The question then is: particles behave weirdly, though? They have logic, but their laws/behavior seem unique. Why? You can have many features in a high layer but only a few base elements at the lowest layer. They can move, rotate, grow, spawn, despawn, update laws. These laws affect particle interactions, which affect their speed, direction, size, existence, and law. It seems plausible that the only interactions they can have are changes in speed and existence! All physics may be based on a cycle of grow/deflate magnetism, like how satellites circle the planet or atom but then kick off as radiation, or fuse and crash. When too many satellites crash, it eventually confesses its greed and gives it back as a star burst. Occam's razor says simple answers win.
immortal.discoveries
1:39 PM (13 minutes ago)
"they can have is the change in speed and existence!"
Mistake. Particles in any universe would have location/rotation, speed of change of location/rotation, magnetism growth/deflate, and existence. The size of a clump has a larger/stronger field.


discounting:
"You're right that the discount factor (so-called γ — note that this is different than λ from TD-λ) acts like an "urgency of life" and is therefore part of the problem — just like it is in human lives: Some people live as if they'll live forever; some people live as if they're going to die tomorrow."

Trying to reward a machine and tell it [what] to [do]. This deals with actions tried, environment, knowledge, and updating beliefs. Some people are lazy, or plan things too far away or too close in time.
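A tiny Python sketch of that "urgency of life" discount (the reward numbers are made up): the same reward stream is worth less to an impatient agent (small gamma) than to a patient one.

def discounted_return(rewards, gamma):
    # Sum of rewards weighted by gamma**t; smaller gamma = more urgency.
    return sum(r * gamma ** t for t, r in enumerate(rewards))

rewards = [1.0, 1.0, 1.0, 1.0]
print(discounted_return(rewards, 0.5))   # 1.875: lives like it dies tomorrow
print(discounted_return(rewards, 0.99))  # ~3.94: patient, far-sighted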


CONTEXT BAG (COMBINATORIAL EXPLOSION OF WISDOM/POSSIBILITIES)
The bigger a meteor is, the more context (insights) you can get: not only do you have a lot more tidbits the bigger the meteor is, but after just a few more particles land on that meteor you get a huge explosion of many more combinations of context, which can all vote and weigh in on a solution and let you generate a Next Word. I can babble about stuff and it might sound really stupid at first, but it doesn't matter, because I can use my bag of context and it works: I can actually make a lot of discoveries iteratively, saving them and then reusing them to make deeper ones, without going to look at the real world to test them, as long as I have a large enough knowledge base to get that explosion effect. You can translate a word in a sentence, or even just fill one in, using context; i.e., the more words in the sentence, the better you can do it. You can say 'the cat was eating food' or 'the tiger was eating food' if there are a lot of cat words in the sentence; or, if the context of the sentence is about a product at a store, you can say 'I was going to buy the cat product' or 'I was going to buy the wheelchair product'. So you see, the word to fill in, translate/transform, or add to the sequence can be totally different based on the context, because it's similarity in context, not just a direct word-to-word mapping from wheelchair to cat.
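Here's a toy Python sketch of that bag-of-context voting (the little co-occurrence table is made up for illustration): every word in the sentence votes for the candidate fill-in words it tends to appear near, and more context sharpens the vote.

from collections import Counter

# Made-up co-occurrence table: context word -> candidates it votes for.
cooccur = {
    "whiskers": ["cat", "tiger"],
    "purred":   ["cat"],
    "store":    ["product", "wheelchair", "cat"],
    "buy":      ["product", "wheelchair"],
}

def fill_in(context_words):
    votes = Counter()
    for w in context_words:
        votes.update(cooccur.get(w, []))
    return votes.most_common(1)[0][0] if votes else None

print(fill_in(["whiskers", "purred"]))  # cat
print(fill_in(["store", "buy"]))        # product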
------------------------------------
The higher you look on the level, the more structures and effects you can get. There are a lot of different effects and theories in physics, from quanta to strings, to tons of types of atoms and particles, to the Higgs boson, from static to tons and tons more molecules, and so many different types of objects, from animals to cell phones. Yet all we need to look at is just one atom, because the atom is made out of all we need, really. The periodic table only has around a hundred or so elements, and the bigger ones are just more of the same stuff: the electron, the proton, the neutron, and the photons. I'm starting to see patterns in my data here, by the way. There are only so many types of atoms, because the larger they get, the more unstable they get.

If we look at the higher level we see the same effect: when planets get too big and have so much mass, this causes a greedy hoarding, but they actually give it back, thankfully; they get so huge that they explode, they burst, they ignite, they give off heat waves, because they are unstable and the probability is high that they will give off. These planets become stars. They give off radiation just like atoms will. They also have orbits like electrons. The same thing must be happening to make them unstable when they get too big, but it always starts with the atom. The radiation that comes out of an unstable atom is boiling off; it has a heat burst just like boiling water, or even a boiling water sphere in space. This happens because the random motion in the clump, or the atom, or the water, is at a very hot temperature and has more motion, and the glue magnetism isn't holding it together. Of course this can happen with a solid too if it's extremely hot; however, we're talking about a planet, or just an atom, that is boiling off radiation while not being extremely hot. Therefore the reason for the unstable burst in the atom must be that there is indeed more heat than there usually is in a smaller atom type, and indeed there would be if there are more particles in the nucleus of the atom: all the heat motion has to do is move from one particle to the next, and suddenly you have an un-aligned heat burst, one particle that wants to jump off the other way and has enough to do it. When this happens it is an unstable atom giving off radiation, resisting growing any more into a new type of atom; it will be short-lived, but could possibly grow larger if cooled. In the case of a planet becoming a star, it starts a chain reaction in the core and the planet gives off the extra heat which the glue is not holding together. You should be able to get a larger planet if the planet is made of smaller, less unstable atom types. Atoms and planets both have orbiting moons. We have seen here that atoms will explode when they are too large or have a huge number of atoms surrounding them, like on a planet. It could be possible a galaxy could explode if it gets too big, even though its planets are not overly big and its atoms are not overly big.
------------------------------------
My theory is a pretty simple one, and I actually like it. It is that you have particles in space that have a location; they have a speed for changing that location; and finally, so they aren't just a gas, there is a glue that holds them together once a threshold of heat direction-diversity is passed.

Because if a laser beam hits a person it will move that person: the light converts to motion, and that motion can definitely also convert to heat (solar sails exist). We can see with boiling water that too much heat will jump off, because the moving atoms or particles are simply motion from a photon carrier moving in an unaligned direction, as random motion; there appears to be no motion, but indeed the burst is motion: heat. And light definitely transforms into heat or motion; they are the same thing. It is a carrier particle that affects the speed of a particle.

If you have a circuit board, the atoms stay what they are and keep their location too, but there are definitely particles moving around on the circuit board, travelling around a copper-wire race track as if glued to it: they move with motion, but they are glued to the track.

We might be able to make and run a simulation based on this simple idea.

If we look at an atom, as said, we see particles, a glue effect holding them together, and we see motion. All the effects we have been observing adjust motion. This magnetism glue is just motion, really.
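Here's a minimal Python sketch of that simulation idea (all the constants are made up): particles with a location, a velocity for the heat/motion, and a short-range attractive glue. Run it and the spread of the blob tends to shrink as the glue pulls everything together.

import random

N, STEPS, GLUE, RANGE, DAMP = 50, 100, 0.01, 2.0, 0.98

particles = [{"pos": [random.uniform(0, 10), random.uniform(0, 10)],
              "vel": [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]}
             for _ in range(N)]

for _ in range(STEPS):
    for a in particles:
        for b in particles:
            if a is b:
                continue
            dx = b["pos"][0] - a["pos"][0]
            dy = b["pos"][1] - a["pos"][1]
            d = (dx * dx + dy * dy) ** 0.5
            if 0 < d < RANGE:                  # glue only acts on near neighbors
                a["vel"][0] += GLUE * dx / d
                a["vel"][1] += GLUE * dy / d
    for p in particles:                        # heat/motion changes location
        p["vel"][0] *= DAMP                    # mild damping so the blob settles
        p["vel"][1] *= DAMP
        p["pos"][0] += p["vel"][0]
        p["pos"][1] += p["vel"][1]

xs = [p["pos"][0] for p in particles]
print(max(xs) - min(xs))  # spread of the blob along x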
------------------------------------
------------------------------------
Ah, look > https://www.livescience.com/18141-wacky-physics-particle-flavors.html

The 6 flavors of quarks and electrons are wacky, and they don't know why they have different measurements of mass, etc.! And some flavors are rarer, less common, and decay faster!!!! Proof! It is the same thing. Heat decay. It boils off from random motion.
------------------------------------
So we've got a lot of high-level effects; you can't look at those at all then, right? Okay, let's look at the atom, since it's all we need to look at. But wait: how then will we decide? There are so many different types of particles, like the up quark and the down quark and all that stuff. I mean, are we supposed to use sensors and get some more high-level effects, which we don't want to look at? No. So we can't use sensors. We must use logic, what we know about the real world. Maybe. We also have to group measurements at the small level; I mean, up there are 6 quarks, so let's just say there's 1 quark with an approximate mass. Anyway, my point is that an atom is made of particles, and all these particles can do is simply move their location using heat; any property or effect or law they can have on other particles would simply be heat transfer, because all they can do to a neighbor is move its location (or rotation). So it seems we have particles, we have heat motion, and the third type of particle would be the glue, magnetism. It's sort of like the opposite of heat: it pulls things together, and maybe it can even align them, and this magnetism seems to have two poles, which might give off the impression of an antiparticle.
------------------------------------
Now that we know a glue particle exists that is different than heat, we can question whether magnetism between two magnets really is heat, or is magnetism. It does seem to be sort of like a glue that wants to pull things together or reject them away from each other. And we know the attraction of magnets seems to be the overcoming effect: although there are repulsions and there is random motion, it all instead pulls together, and all the magnets on the table just come together; the only way they will come apart is by that explosion, not the rejection of the negative poles. However, in larger systems you may see the bursts come out of the poles, because the magnetism is weakest there?
------------------------------------
some Facebook chit chat - my replies:

Occam's razor states the simple answer is the best.
I said before that time dilation was just the machine thinking more slowly, but you don't agree.
They don't find large atoms as commonly, nor larger atoms.
In planets, it's more because of the atomic nuclei fusion though; back to the atom lol. But maybe it's just the quark decay of the up/down all this time.

I was thinking of a simple theory of physics btw, even though it's probably silly lol: there seems to be just particles that have a location, a heat/motion that changes that location (observed laws are just speeds of location change...), and lastly (to avoid having a gas of randomly moving particles) a glue/magnetism.

I mean red light, bubbles in a reaction: it is all brain processing... just particle location changes!!

If you look at a star, a nucleus, a quark, or boiling water, the burst radiation is clearly motion, but it is just a lone heat 'particle' that was not aligned with the others... alignment of heat is motion, not heat.

Well, you have a particle, a quark say, but why should it move? You need a particle that DOES move... and we have those: photons move at the speed limit!
Photons move at light speed; a quark doesn't move at all...
My conclusion all this time is that light moves at the fastest speed in the universe, and other particles don't move at all... so clearly photons move/carry with them the other non-moving particles.

so why is there the strong force and also gluons?
(i must research more)

Have you ever found a particle simulator?
I know they can't run trillions of particles, but if they get it right, they can see some effects.
I mean a board made of a few thousand atoms; I mean, test electrical circuits using actual atoms.
I'd research that, as it seems interesting.
Last I tried in Blender, I can run thousands of particles, with motion; the only thing they're missing is the glue to hold 'em lol.
Just simplify it.
Oh, 'cause the gravity from one particle acts on all the rest so far lol?
You can simulate bubbles without atoms etc.
High level.
Maybe the laws of physics are not based on the particles but on the high-level effects, because, I mean, high-level simulation gets the same result without the need for atoms.
All they use is particle objects, and motion, to name a few...
It may give insight into the particle laws, I mean.

https://en.wikipedia.org/wiki/Standard_Model?fbclid=IwAR0BqBsb0u71VzEPY_b-uVQDTHEoyKQIEsROejbmdSx2XSusdkniL2P9hnc

OK so, oh yeah, electromagnets... particles... motion... glue... check, check, check, and, um, is it glue? The moving electrons grow a glue field? May be a hole in my theory.

Quarks don't have motion; yeah, they're bigger/heavier than the photon and may move slower, but nooo, 'cause we can remove heat from a body!


it worked
https://www.youtube.com/watch?v=vTYp5Kd9nMA

https://www.youtube.com/watch?v=-ZNEzzDcllU

https://www.youtube.com/watch?v=6yaY4Fw-ovM

superposition
entanglement

Possible reference to Glove (meanings of words based on other words). A word can be in 500 dimensions at once: cat=dog/puppy/horse/etc. Quite useful. The entanglement is the dots in this volume: imagine 3 dimensions (a cube) with points in it; cat is near dog, etc.; these dots are entangled. Maybe the entangling is the actual network that stores it :-)
immortal.discoveries
10:13 PM (1 minute ago)
Better put, a qubit/dot in this 3D cube can be in 3 dimensions (or more) (superposition), but you need OTHER dots to relate to! That's entanglement. You could have 3 dots and 100,000,000,000 dimensions, or 100,000,000,000 dots and 3 dimensions. However, having more dimensions than dots is useless.


Ok, I read 92% of it. I skipped a paragraph at the end, and I can't say I read e-v-e-r-y word in the first 8 paragraphs lol. I got very little from it. I could've explained it using 5% of the text you wrote. Lots of fancy words also; it was like Chinese. I agree with none of it. Nonsense.          :p

At the end you say understanding is higher levels, as they have exponentially more links/data. Yeah. But GPT-2 has the understanding/commonsense reasoning already, and I already have the best graph schema you seek. It combines alphabet building and slips Glove on top (pun unintended). Embeddings are linking high-level phrases/words to others. Then you link one of those to another... All you need is hierarchy and heterarchy. Glove is good at saying what is what, and what entails what also, even good for linking. Hierarchy is good at entailment, not so much translation, and is possibly well suited for linking nodes.

(see the 3 triangle pics in my general pic folder)
So that's the best graph/net. GPT-2 may have the better one actually, especially the implementation. As for this 4-level triangle though, that simply goes IN the graph. Let me explain. First of all, THERE ISN'T INFO, DATA, KNOWLEDGE, WISDOM. Lol. My graph schema has nodes: a, b, c, d... hi, cat, cattle, cat ate the, etc. It can link them. It can build. It can link groups by using Glove and then linking two major vines in Glove using the hierarchy. It has all that, dude; it has it all, all types: nodes, links, building, groups, dude. But none of these, no matter the level, are data nor wisdom. Btw, I could make it 55 steps up: data, meta-data, info, vetted-info, knowledge, geniusness, wisdom, coolStash, alienStuff... Lol. You need my/GPT-2's net, and the data that sits IN it is emulating this 4-level appearance, brah. Let me now look at these three images, now that I've read your paper. The simple pic: all these phrases can be stored in my hierarchy/heterarchy. Also, any definition like cool, sad, important, human-written, unique, wisdom, data, can be linked to it as a node; you just store it, and the word, and link 'em. It's all, ALL, knowledge; you just say it all out your loop/or mouth. Image 2 (brown blocks): I see it says really-good, frequent, useful, summarized, evolved facts the higher you go up the triangle. My schema can link all facts to tags like I said: categorized, interesting, summarized. My nodes feature frequency and rank though, but not every word tagged with categorized, cool, happy, etc. If your decision is of 7 choices, the ranked, frequent default (and, if you want, the happy/cool choice) is the chosen candidate! Left side of the pic: BADBADBAD, it is allll context, meaning, and insight. Omg. Pic 3 (fruit loop stuff, brah): left side, same issue!! Purpose is all the nodes/links too, rrrr! And nodes with rank neurotransmitter are also purpose, truly. Right side of pic 3: OMG, my whole schema is context; it is ALL entailment of the future (the why/how) and what-is-what; it's all patterns...

I have now placed your article in my trash bin hehe.
(Sorry, but I just did it... and thought it was funny lol.) I do think it's cute and suggest you save it.


Big Data
Big Compute
Big Money
Big Company
Big Error
Big Confusion
If you utilized small data to its fullest potential, it'd be more like:
Small Data
Small Compute
Small Money
Small Company
Small Error
Small Confusion
We still need the grand neural schema though. One that fits the Glove over the Hierarchy and goes beyond GPT-2.

Luckily we have this place where we get together and share knowledge. One day, possibly soon, the knowledge we share will be highly readable and will make sense to most members. Only then will everyone get closer thinking and feel more like a team. Right now a lot of knowledge is all over the place like a paint splash.

You can make a team right now, even though we are already all here, and it'd be like OpenAI's team, meant for seasoned practitioners well specialized in their 'language'. But if we wish to make such a team faster and accept diverse people for deeper cooperation, we will need to clarify our concepts deeply, summarize lots, and share them.

I heard OpenAI mentors scholars who are picked for their work. Why must they mentor them and not use a compact 10-page training course? Because they DON'T KNOW WHAT TO SAY.

Do you need a mentor to explain 2+2=4? No, an ultra-small PDF will do. It'll work for most humans.


So a too-big AND/OR too-hot quark, or atomic nucleus, or planet, or pot of water, will burst off radiation flares.

Note that for the quark, nucleus, and planet, it is always the quark... HOWEVER... in the pot of water, the heat makes the ATOMS kick off...

Therefore it seems radiation releases are due to random heat motion (not aligned, i.e. motion) in an object where the gluon magnetism fails to hold the particles together.

So releases happen when too big and/or too hot. Too much of something, but always the heat, actually.


Do you mean, instead of feeding the net data and learning, to instead request new output data/solutions?
Doing so is how we train ourselves mentally. We can step forward, chaining. We store the data output AND update the model. If you know enough, you can stop eating data and start fabricating deeper discoveries yourself. It's better if you research along the way, actually.

As for w2v, the net has every word in it: cat, hi, dog, home, run. Based on context, you link words to each other with probabilities of how similar they are. You have just started a Heterarchy. These are update-able links (weights). Once you're done digesting your data (ex. ran out of data), you can keep building your Heterarchy. You say: OK, cat=horse/zebra/etc. and dog=horse/zebra/etc., so I'm going to make a link between them / make that link stronger. This is Online Learning. This web is meant for translation tasks, even entailment, because entailed words 'look' like similar words, and you'll generate either 'dog ran to' or 'dog cat horse', which are both sentences. The latter is a list of items.
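A rough Python sketch of that online linking step (the words, weights, and boost are all made up): if two words share enough similar neighbors, add or strengthen a direct link between them.

links = {
    ("cat", "horse"): 0.8, ("cat", "zebra"): 0.7,
    ("dog", "horse"): 0.9, ("dog", "zebra"): 0.6,
}

def neighbors(word):
    return {b: w for (a, b), w in links.items() if a == word}

def link_by_shared_neighbors(w1, w2, boost=0.1):
    # Online learning: strengthen w1-w2 by how many neighbors they share.
    shared = neighbors(w1).keys() & neighbors(w2).keys()
    if shared:
        links[(w1, w2)] = links.get((w1, w2), 0.0) + boost * len(shared)

link_by_shared_neighbors("cat", "dog")
print(links[("cat", "dog")])  # 0.2: linked via horse and zebra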
immortal.discoveries
9:19 PM (1 hour ago)
The cool part is you can tag a word like 'food' or 'sex' in that heterarchy with a red or blue dye-potion chemical neurotransmitter reward rank, and see what words get updated and relate to food and sex. This is how our desire for food can, through mental discovery, make us seek paper clip manufacturing. It all leads back to food and sex.
immortal.discoveries
9:22 PM (1 hour ago)
What's cooler than that Heterarchy? Using multi-sensory data. You can say: OK, the word 'shepherd'=the sound 'bark'/etc., and the word 'dog'=the sound 'bark'/etc., so therefore I'll make a link between these 2 senses / make the link stronger.

It's one big bag of context dude.

I got more goodies too, but later I will show them all together.
immortal.discoveries
9:25 PM (1 hour ago)
Oh and that heterarchy can be displayed as a viz too.
immortal.discoveries
9:45 PM (57 minutes ago)
If you have a 3-dimensional cube of space with 5 dots (words) in it, you have a dictionary of 5 words that each has 3 axis values: 15 values in total, 20 items total. This tells you the distance between any 2 of the 5 dots (just draw a diagonal line to the other dot in the 3D space to get the distance).

If we now look at my web, it is ALWAYS 2D. Each word in your dictionary is there. Each has a link to other words, with a relational proximity. Of course this may cost more storage/compute. You can always draw a viz. Let's focus on my 2D web heterarchy. You have an array of words, your dictionary, 5 words, and instead of 15 location values you could end up with, if each word links to each word, 10 values, wow! 15 items total! What's more, it gives you more precise flexibility, more dimensionality. Infinite! Let me see here...
immortal.discoveries
10:25 PM (17 minutes ago)
https://ibb.co/CVTW3qL
immortal.discoveries
10:31 PM (11 minutes ago)
In the image I drew above, I compare my web heterarchy to the w2v concept.

The red numbers (zoom in) are the similarity probability approximation proximity.

Mine does have a tad more storage (you could maybe store words/numbers as Huffman codes).

But mine allows higher dimensionality, because you can tweak a link without affecting the others. And a viz: w2v with 500 dimensions has NO viz, whereas mine always can have one because it's always 2D! Mine is also update-able Online Learning. And it can be interactive for the user.
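Working out the numbers above in Python (the 3D vectors are made up; the distance is the "diagonal line", i.e. Euclidean):

from itertools import combinations
from math import dist

words = {"cat": (0, 1, 2), "dog": (0, 1, 3), "hi": (5, 5, 5),
         "home": (4, 4, 4), "run": (9, 0, 0)}

coord_values = len(words) * 3                   # 15 coordinate values (w2v style)
pair_links = len(list(combinations(words, 2)))  # 10 pairwise links (web style)

pairwise = {(a, b): dist(words[a], words[b]) for a, b in combinations(words, 2)}
print(coord_values, pair_links)                 # 15 10
print(round(pairwise[("cat", "dog")], 2))       # 1.0: cat and dog sit close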


The universe doesn't need to begin as a long movie with all frames already made. It just has to let physics roll out on its own. We can test this in a computer simulation. The universe generates the "missing data". The universe starts off lossy, but by the end it's lossless and has ALL the data in a file. Or a brain. So while it seems like a lossy file cannot become lossless, it can. Lossy kicks lossless in the butt. And we can simulate this on a future computer. Self-regeneration works as the best compressor (Physics is). Re-use/recognition/patterns/superposition are everywhere in the universe; things pull together and fill in missing data, e.g. dead employees. "Understanding"/"recognizing" allows filling in missing data using other related data.

Lossless compression is ALREADY lossy compression, because after compression the data is missing, until you decompress it. As for "lossy compression", it may result in words missing; however, a human can add them back, so even lossy compression results in the data coming back. See, same result. It auto-regenerates.

So yeah: generating the approximate missing data using approximate imputations. The induction prefers smaller, simpler, related, frequent, approximate solutions.
"Then, you encounter new evidence—new observations—and you update your beliefs/probability in response"... we look for information if we don't have it.
"It calculates this probability based on the prior probability of the hypothesis alone, the probability of the evidence alone, and the probability of the evidence given the hypothesis."
"Choosing the correct priors (evidence or questions) is your first step."

Turing machines are the simplest computer; binary strings are the simplest language (they can run videos, parallel circuits (sequentially), etc.). What if there is a simplest logical AI? The simpler the computer, solution, or AI, the more likely it is the best *working* solution.

"Another popular method of approximating any intractable algorithm is to use randomness. This is called a Monte Carlo method. We can’t test all hypotheses, so we have to select a subset. But we don’t want our selection process to bias the result. Therefore we could randomly generate a bunch of hypotheses to test. We could use an evolutionary algorithm, where we test this seed set of hypotheses, and keep the ones that generate data closest to ours. Then we would vary these hypotheses, run them again through the Turing machine, keep the closest fits, and continue this process until the hypotheses actually predicted our data exactly."


When you think down branches for the best, frequent, likely answer to choose, the effect from the cause itself may make you turn back a few steps, because you think: hmm, if I go to the strip club, I may find some bills THERE, but then among the choices is a gunman too. You think: not worth it! No no no!! Back a step.

There are mostly bacteria etc., not as much human cell mass, and even fewer scientist humans! Even fewer AGI-knowledgeable top players!! When one gets to the goal first, he updates the rest. This always happens. Exponentially more recruit near the end times. But can someone in AGI teach/mentor a normal human in a few days?


No, really: lossless compression gives you a file that is no longer your original file. Only after some sort of luck do we manage to get the same file back; same as if we did lossy compression.

You could lose your lossless decompressor ( ͡° ͜ʖ ͡°)


Let me formalize my proposal.

If we apply Lossless Compression on a text file that contains the string "2+2=4", it results in missing data because the new data is smaller in size (because of compression). Physics results in the data de-compressing back and we have our original lossless data back.

If we apply Lossy Compression on a text file that contains the string "2+2=4", it results in missing data because the new data is smaller in size (because of compression). Physics results in the data de-compressing back and we almost have our original lossless data back. Say our data is now "2+2=". A human could easily walk by and say, hey, 4! Suddenly we have our original lossless data back "2+2=4".
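A minimal Python sketch of both cases (zlib is a real library; the eval trick merely stands in for the human who fills in the 4):

import zlib

original = b"2+2=4"

# Lossless: the compressed bytes aren't the data until decompressed.
packed = zlib.compress(original)
assert zlib.decompress(packed) == original

# "Lossy": drop the answer, then regenerate it from context.
lossy = original[:-1]                                       # b"2+2="
restored = lossy + str(eval(lossy[:-1].decode())).encode()  # human-like fill-in
assert restored == original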

Therefore lossy=lossless. Self-Regeneration is unavoidable. Earth could end tomorrow, but we'd still get the end result anyhow: the fully optimized planet, ASI.

Whether you try to generate it all back or fail and a human helps, it's all the same thing.

I know that's against your rules, but this is the real world; your rules are too localized. While one is slower, they are both self-regeneration.


1) As OpenAI did with OpenAI Five for beating Dota's best players, and for the robot arm: run many simulations against itself (and past versions 20% of the time),
2) randomize everything in each, e.g. starting location, size of enemies, your health, lighting, time to finish,
3) sync/combine/average all of it together into a single policy,
4) "Using a separate LSTM for each hero and no human data, it learns recognizable strategies. This indicates that reinforcement learning can yield long-term planning with large but achievable scale — without fundamental advances, contrary to our own expectations upon starting the project."

 A Dota-playing AI must master the following:

Long time horizons. Dota games run at 30 frames per second for an average of 45 minutes, resulting in 80,000 ticks per game. Most actions (like ordering a hero to move to a location) have minor impact individually, but some individual actions like town portal usage can affect the game strategically; some strategies can play out over an entire game. OpenAI Five observes every fourth frame, yielding 20,000 moves. Chess usually ends before 40 moves, Go before 150 moves, with almost every move being strategic.
Partially-observed state. Units and buildings can only see the area around them. The rest of the map is covered in a fog hiding enemies and their strategies. Strong play requires making inferences based on incomplete data, as well as modeling what one’s opponent might be up to. Both chess and Go are full-information games.
High-dimensional, continuous action space. In Dota, each hero can take dozens of actions, and many actions target either another unit or a position on the ground. We discretize the space into 170,000 possible actions per hero (not all valid each tick, such as using a spell on cooldown); not counting the continuous parts, there are an average of ~1,000 valid actions each tick. The average number of actions in chess is 35; in Go, 250.
High-dimensional, continuous observation space. Dota is played on a large continuous map containing ten heroes, dozens of buildings, dozens of NPC units, and a long tail of game features such as runes, trees, and wards. Our model observes the state of a Dota game via Valve’s Bot API as 20,000 (mostly floating-point) numbers representing all information a human is allowed to access. A chess board is naturally represented as about 70 enumeration values (a 8x8 board of 6 piece types and minor historical info); a Go board as about 400 enumeration values (a 19x19 board of 2 piece types plus Ko).
The Dota rules are also very complex — the game has been actively developed for over a decade, with game logic implemented in hundreds of thousands of lines of code. This logic takes milliseconds per tick to execute, versus nanoseconds for Chess or Go engines. The game also gets an update about once every two weeks, constantly changing the environment semantics.

Our approach
Our system learns using a massively-scaled version of Proximal Policy Optimization. Both OpenAI Five and our earlier 1v1 bot learn entirely from self-play. They start with random parameters and do not use search or bootstrap from human replays.

                                      OPENAI 1V1 BOT               OPENAI FIVE
CPUs                                  60,000 CPU cores on Azure    128,000 preemptible CPU cores on GCP
GPUs                                  256 K80 GPUs on Azure        256 P100 GPUs on GCP
Experience collected                  ~300 years per day           ~180 years per day (~900 years per day counting each hero separately)
Size of observation                   ~3.3 kB                      ~36.8 kB
Observations per second of gameplay   10                           7.5
Batch size                            8,388,608 observations       1,048,576 observations
Batches per minute                    ~20                          ~60

Our agent is trained to maximize the exponentially decayed sum of future rewards, weighted by an exponential decay factor called γ. During the latest training run of OpenAI Five, we annealed γ from 0.998 (valuing future rewards with a half-life of 46 seconds) to 0.9997 (valuing future rewards with a half-life of five minutes). For comparison, the longest horizon in the PPO paper was a half-life of 0.5 seconds, the longest in the Rainbow paper was a half-life of 4.4 seconds, and the Observe and Look Further paper used a half-life of 46 seconds.
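The half-lives quoted above follow directly from γ and the 7.5 observations per second in the table; a quick check in Python (my own sanity check, not OpenAI's code):

```python
import math

def half_life_seconds(gamma, steps_per_second=7.5):
    """Seconds until a future reward is discounted to half its value."""
    steps = math.log(0.5) / math.log(gamma)
    return steps / steps_per_second

print(half_life_seconds(0.998))   # ~46 seconds, matching the text
print(half_life_seconds(0.9997))  # ~308 seconds, i.e. about five minutes
```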

While the current version of OpenAI Five is weak at last-hitting (observing our test matches, the professional Dota commentator Blitz estimated it around median for Dota players), its objective prioritization matches a common professional strategy. Gaining long-term rewards such as strategic map control often requires sacrificing short-term rewards such as gold gained from farming, since grouping up to attack towers takes time. This observation reinforces our belief that the system is truly optimizing over a long horizon.


"In reality the separated colours of the rainbow don't exist, it's actually a smooth spectrum, the human brain creates the divisions/ categories.  Logic/ mathematics/ language are built from the same phenomenon."


I saw this on Hacker News and thought you'd have more hands-on experience.

About https://openai.com/blog/emergent-tool-use/


You can see my first questions here: https://github.com/openai/multi-agent-emergence-environments/issues/9

My new study, which I wrote today, is in the attachment. I'm really interested in grasping what's going on. I do have 5 years of 24/7 research on AGI and a lot of time/money; nonetheless, the terms they use are by far boggling my mind. This, GPT-2, and the Arm seem to be most of OpenAI's knowledge, meaning if I understand this, I'll be very far along in my research and able to implement my own. Try using the terminology/simpleness I use in the attachment. Let's start at a high level.


For GPT-2, I almost made it from scratch using no nets, and I can probably explain it well in my next email.
&
Who's been watching OpenAI closely?
&
If you haven't, speak up. I assume everyone here has checked their site out.
&
Prior understanding:
Who here can explain the following to a 5-year-old kid, or an old mom, in clear English?:
https://openai.com/blog/emergent-tool-use/
I.e. how well do you understand it? How many people can you recruit into AGI?
How well can you summarize it (in 100 words. And 200 words. And 500 words.) without any filler content and only main points?
Can you make it seem boring, like cake?
Who here can draw an intuitive viz so mom can understand it?
&
Here's my go at it.
1) There is a simulated world with 2 teams.
2) Looking at their paper, each little man has the choice to move forward/backward, rotate, grab a nearby object, and lock an object in place.
3) They seem to start off randomly jiggling around, but learn general tricks that work in diverse environments. For example, they seem to learn that holding an object is usually better than not, that a win is more likely if a teammate is nearby, or that taking non-sharp turns around walls lets them run faster. They decide when to use/combine these, and usually do so when they see the recognizable cues.
4) The red team learns to move towards the blue team members because one of its jiggling moves got a blue member in its line of sight. So now the red team has found general tricks to find the blue team and wins the game's goal criteria. Until now there was only random jiggling, but then it won a game and found what, and when, will work, across diverse environments.
5) The red team cannot further optimize, but the blue team can. The blue team learns, by random jiggles, to use cubes to trap themselves in a room. This can be used in different environments, and the learning of this technique could have stemmed from the desire to usually carry boxes around.
6) This competition continues until it can no longer continue. Total/global optimization of evolution.
7) At the last stage, when OpenAI moves a ramp to the back of the room, the blue hider learns to prepare his friend's box and then goes to get his own box when the other arrives with the ramp, because the time limit before the reds enter the room is so short. This, without a sentence/plan generator that reasons out its plans, must be the result of a brute-force algorithm as mentioned. They do use past tactics to learn new ones, which generalize to new environments, but this was clearly learnt from randomly jiggling around (while doing learnt tactics, so not completely random jiggling). Btw it's a small environment really, without many objects or time for long game plans. And multiple plans result in the same win/solution.
8) So it seems that they learn by randomly trying actions. These actions will work in diverse scenes, when they see similar cues/conditions. This behavior, topped with some random behavior, can result in deeper learnt behavior that wins deeper games.
9) They say they use algorithms GPT-2 uses. They look at objects instead of words, and decide what Next Action to write to the story plan. If they see a scene they've seen before, and know what they did most frequently in previous won games, they will use that learnt action (see the sketch after this list).
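A toy sketch of point 9 in Python, using the action set from point 2. Everything here (cue strings, the frequency table, the exploration rate) is my own illustration of the idea, not OpenAI's actual method:

```python
import random
from collections import Counter, defaultdict

# scene cue -> how often each action appeared in games that were won
action_counts = defaultdict(Counter)

ACTIONS = ["forward", "backward", "rotate", "grab", "lock"]  # from point 2

def record_win(episode):
    """episode: list of (scene_cue, action) pairs from a game that was won."""
    for cue, action in episode:
        action_counts[cue][action] += 1

def next_action(scene_cue, explore=0.1):
    """Mostly replay the most frequent winning action for a familiar cue;
    otherwise fall back to 'random jiggling'."""
    if scene_cue in action_counts and random.random() > explore:
        return action_counts[scene_cue].most_common(1)[0][0]
    return random.choice(ACTIONS)

record_win([("teammate_nearby", "grab"), ("wall_ahead", "rotate")])
print(next_action("teammate_nearby"))   # usually 'grab'
```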

The Update:
Ok, I got it: they run millions of rounds.... at stage 1 they start random and MUST accidentally do the right action to find the hiders or run from the seekers. By stage 2/3 they use boxes by accident; they must, there are no hints. The only thing they could have learned so far is that they won the last game when their friend was nearby and seen! Or to make sharp corner turns. The ramp use, unless they re-used the other agents' model, same thing: it was an accident! The only search-space hint was to be near their buddy..... booo.... anyway, by the next stage the blues learned to take in their ramp; this was an accident too, although they supposedly had hints, I assume, of running around with objects, being near a buddy, the buddy doing the same behavior, going back to base, and making sharp turns. By the last stage, the helping out of his friend in preparing his box, again, ACCIDENT! Why would he go up to it? Ok... to carry it, but it required the random behavior of doing so and then dropping it, and that one worked; he didn't know it would, dude, obviously! So it is a combo of raw simple RL, competition like GANs do, and some hints of what is what and what worked entailing that.
&
Let's not forget they trained on millions of diverse scenes; the agents could have learned what didn't work and what did work and diminished the 'error', maybe. So instead of learning what wins games by clearing the time limit fully, just clear it longer...


So...
1. I want Bob to eat.
2. I want a fruit to eat.
3. I want food to live.
....GPT-2 has a tendency to translate its window and then translate its Next Word candidates, meaning "and Bob was eaten by an" may be 'seen' as "and apples were eaten by an" to gather candidates: animal, Bob, apples, hats. Then it picks 'apples' because of its relation to words in the story, its frequency of appearing as the Next Word, and its relation to next words like 'animal'. My solution to this is it has to have seen it entail before, enough times at least. It may not be exact, e.g. it's seen 'bob was eaten by a carrot', but as long as it knows carrot=apple is 98% likely, then it can know it entails. In this case it doesn't. So it'd ignore the candidates 'apple' and 'apples'.
&
So all you need is frequency, a basically exact match proving it's 'normal'. ;p
&
It may be distant if you attempt to check if the ending exists in memory,
"and Bob who i know very well was later then eaten by the thing that is called a bright red apple"
but GPT-2 seems to work with distant sentences?
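A minimal sketch of the filter idea proposed above, in Python. The stored sequences and the 0.98 carrot/apple similarity are invented for illustration (taken from the hypothetical in the text), not real GPT-2 internals:

```python
seen_sequences = {"bob was eaten by a carrot"}
similarity = {("apple", "carrot"): 0.98}

def sim(a, b):
    if a == b:
        return 1.0
    return similarity.get((a, b), similarity.get((b, a), 0.0))

def entails(window, candidate, threshold=0.9):
    """Accept a Next Word only if window+candidate (or a near-synonym
    version of it) has actually been seen before, 'proving it's normal'."""
    if f"{window} {candidate}" in seen_sequences:
        return True
    for seq in seen_sequences:
        if seq.startswith(window + " "):
            seen_word = seq[len(window) + 1:]
            if sim(candidate, seen_word) >= threshold:
                return True
    return False

print(entails("bob was eaten by a", "apple"))  # True here, since carrot~apple
print(entails("bob was eaten by a", "hat"))    # False: never seen, no synonym
```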


unsupervised one-shot imitation-learning inter-agent self-play

Did you ever think GPT-2 could self-play itself through imitation? Imitation keeps the topic the same but also learns knowledge quickly by passing it down, which requires communicating with other related agents.


you mention chess........ yes, you look at the board, see your guys, see the moves they could do; choices. In fact you first choose your guy, then his actions that aren't blocked.

Which guy? Which move? It's a tree of paths to go down. Each branch has a vote. That's where multiple ANDs combine together. OR means take the choice-B branch if you see x or y, i.e. either cue alone will vote on branch B heavily enough without needing AND (both cues).

Focus on treeing, not AND (see the sketch below),
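A tiny Python sketch of the cue-voting idea above. The branches, cues, and weights are all invented; the point is just the mechanics of AND-ish cues (each contributing a partial vote) versus OR-ish cues (one cue alone voting heavily):

```python
def score_branch(cue_votes, seen_cues):
    """Sum the votes of every cue on this branch that is actually present."""
    return sum(w for cue, w in cue_votes.items() if cue in seen_cues)

branches = {
    "develop knight": {"open centre": 0.4, "knight undeveloped": 0.4},  # AND-ish
    "castle":         {"king exposed": 0.9},                            # OR-ish
}

seen = {"king exposed", "knight undeveloped"}
best = max(branches, key=lambda b: score_branch(branches[b], seen))
print(best)   # 'castle': one heavy cue outvotes a lone half of the AND pair
```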


The only people advancing AGI seem more and more to be larger companies like OpenAI and Google. They get more mentions. They are super rich now, and famous, and have a huge team. They re-use their architectures across OpenAI Five, the Arm, GPT-2, MuseNet, the Image Completer, and Hide & Seek. And they give demos with incredible results. I have some cash, and I'm thinking of giving it to them. Hmm. Nasty black hole. We know one thing though: black holes explode when they get too big. Leak.

Anyone else got cool AGI projects going on? I don't see any.


If you have 2 players and a chess board, it is indeed a tree, Andey, yes, because think of a camera taping the 2 players at the table: it has a finite run as a movie, and there are only just so many possibilities. The root of the tree has e.g. 129 possible states, and each of those has ~129. For a given agent, his probabilities voting on a candidate choice of tree branch weigh in, and he picks a path. He may not EVEN see an option (oops, he says, I missed that path), and he may see a branch but not see the votes for why to take that path.

It's a tree, although to the agent it always just feels like the current state where he is, and that he is looking for choices + reasons to do one of them. That's it.


Yes, I thought you meant that. Nonetheless, as time moves, they are a different path :)

parallel universes lol

For that moment, when reached, it will have votes (and a history of where it came from, meaning the destiny path is better)


Self-supervised learning is also when you talk to yourself; you squeeze out and mine gold from what you know, so it's supervised, but not by us lol. By itself! Unsupervised! Anyway, it's learning, ok? Internal learning.

Imitation learning is faster: you pass down and replicate wisdom, not emerge it.

Both are needed!


Game producers don't make Pikmin etc. because they're just there for the money, like movies etc. are; they make Mario games the most: what most kids want, whatever works.


i love how in the mirror i'm often a different person lol in the dreams
sometimes i feel walls dude
my tactile sensory receptors are wirelessly mapped to the wall
it was so weird
it was a bridge wall in a sewer to be precise
greenish
sometimes i'm in my kitchen and i'm saying, mom, mom, look i can lift off the ground if i pull my feet up enough
and i fly and levitate in the kitchen
or jump off walls
i thought it was real life
happened over like 6 times
if you think about it all day you will have sexy dreams lol
i never tried tho
why waste a whole day to do it?
is it worth it?
maybe it will LAST LONGER
i had a girl for like 10 seconds dude
dream changes
if you think all day of it.... it may stay
i can daydream it tho
i can see her n me right now
but faint
not vivid
so if i go 2 sleep, it will show up u see,
i did it once
but it was an alien
i saw a little stub, like a bat pole, pop up in front of my bed with a little face on it dude, creepy
i went to bed
and i saw it for real, no different!
it works
understand?
you can force the dream to be what u daydream all day
make it one thing
don't change the daydream
you have no control unless you get skill at it
i had 2 penises one time, felt them both
when you have a full bladder the dream is always me looking for a bathroom
in a university once, looking
all u gotta do is play an audio while you sleep
you gotta use voice synthesis
to get it to say it
with the right accent
you need headphones
imagine hearing a woman all night?
your dream will be influenced all night
it would work!
"Think about your perfect sex dream and actually picture that fantasy playing out. "The more you can make it seem real, the more likely you are to have that dream at night," says Dr. Winter. The hope is that your body will pick up on the physical cues you give it and render them again at night."
"You know you've had an amazing sex dream when you wake up feeling ridiculously happy. If only you could make it happen every night, right?! In a perfect world, you would. But in the real world (sigh), you pretty much get the dream you get, including the one where you’re naked in your school cafeteria."
"While you might never be able to fully control your dreams, it's absolutely possible to train yourself to have more sex dreams, says W. Christopher Winter, MD, a board-certified sleep medicine researcher, neurologist of Charlottesville Neurology and Sleep Medicine, and author of The Sleep Solution."
"Experts don’t know exactly what causes sex dreams, but it’s generally thought that dreams have something to do with what’s on your mind, either consciously or subconsciously. Sex dreams, then, could be a reflection of what you’ve been thinking about during the day or even repressed desires, says Dr. Winter."
"There are obvious perks to having sex dreams, like getting laid without needing to lift a finger (kinda), but sex dreams can also help you relax, lower your stress levels, and give you a sense of calm when you wake up, says Jess O’Reilly, PhD, a sexologist and author of The New Sex Bible. They can also help give you greater insight into your sexual desires and inspire fantasies to act out with your partner later, she explains."
"With all those potential benefits, it's clear why you'd want to have as many sex dreams as humanly possible (as if you needed to be convinced). These tips might actually make it happen more often."
"
1. Think sexy thoughts.
Because your dreams are usually some kind of reflection of your thoughts when you’re awake (whether you’re aware of them or not), thinking about sex during the day can translate into your dreams at night. "If you can constantly remind yourself of the topic, you’ll be better off," says Dr. Winter.
2. Visualize your perfect dream.
Think about your perfect sex dream and actually picture that fantasy playing out. "The more you can make it seem real, the more likely you are to have that dream at night," says Dr. Winter.
3. Act it out IRL.
Dreams tend to reflect what’s going on in your life, so it doesn’t hurt to try to play out your fantasy (as best you can) while you’re awake, says Dr. Winter. While you can do this all in your mind, it can also be helpful to set the stage—mood lighting, candles, sexy music, vibrator—and go to town. The hope is that your body will pick up on the physical cues you give it and render them again at night.
4. Slip into sexy lingerie.
Besides acting your dream out IRL, you can trigger your brain into having sexy nighttime thoughts by sleeping in silky lingerie—or nothing at all. "It’s important that when you go to bed, you have some kind of a routine for setting yourself up for the dream," says Dr. Winter.
5. Set up the dream as you’re lying in bed.
Play out the dream, step by step, as you're lying in bed, says Dr. Winter. Think about meeting someone at a hotel bar: what you’re wearing, what they’re wearing, and what you’re discussing. Then, segue into the naughty part, focusing on the details. At some point, you should drift off into (super hot) dreamland.
6. Practice, practice, practice.
It's unlikely that you’re automatically going to have the sex dream you want the first time you try (although if it does, awesome!). "You’ll want to rehearse the same thing every night," says Dr. Winter. Don’t be discouraged if it takes time.
As soon as you wake up, write down what you dreamt about. "Even people who say they don’t dream will write something down from the night before," says Dr. Winter. Your brain dumps information over time—even within minutes after you wake up from a dream—to make room for more important stuff, so you’ll want to act fast.
This is an important step in having more detailed dreams. If you journaled that you walked into a hotel bar, saw someone interesting, and then woke up, that can help you build on the narrative for next time.
"


"1) 64-bits (8 Bytes) of data can "flow" thru the bus at any given time. This data is "Pushed" thru the bus each time the CPU "clock" ticks. If you're curious, a CPU "clock" is usually an internal crystal that vibrates ("oscilates") when it is electrically charged. The osculating crystal produces an electronic "wave" which is then transformed into a rigid, "toggle", wave form. This is the heartbeat of the processor, and each beat "pushes" data thru the bus (simply put). For example, a 1Ghz, 64-bit CPU can "push" data 1 Billion times every second, and it pushes 64 bits each time. Therefore it's processing power or "bandwidth" is 64 Billion bits/second or 8 Gb/s "bandwidth"."
"they just transmit a frequency and accept a frequency, then all the commuinication shares the same space."
a billion crystals/transistors vibrating a billion times a sec generate a wireless wave that goes to all crystals..... but how does each one then get its own unique signal???
but we need a rule, to say if it is not 1 then send 0, but different for each crystal
i get that each qubit affects all the other million qubits.... but how does it keep each one unique and process?
i bet you can build a quantum pc without being a quantum researcher, using wireless radio waves, or wires, same effect, no need to be so micro-small though!
each qubit is entangled with all other qubits, acting in superposition, right?
my 1 thing i need to know is how do you tell each part of the machine what to do, if it does most of it by listening to other instructions?
a quantum pc they made has qubits that hear each other; if they hear each other though, it is copying the bit
2 qubits: 110011001100110011001100
3 qubits: 111111100000001111111000000011111110000000
4 qubits: 111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000001111111111111111111111111111111111111111110000000000000000000000000000000000000000000000011111111111111111111111111111111111111111111111
you get more and more compute the more qubits, it's exponential, BUT you don't get unique compute, only a few, very few
if all qubit nodes talk to each other, how do YOU tell them what to individually DO?
My point is this: you have a processor, like a modern cpu, and you tell it to do something for 5 seconds. How many DIFFERENT processes could you do in 5 seconds? Let's say 800. Now, if we look at a quantum chip, it makes qubit nodes talk to ALL the others. It DOES do more, but they are not unique processes. So instead of being able to do 8 trillion different runs, it can only do 8 thousand.
"A quantum bit, or qubit, has two awesome and confusing properties. First, it can set itself to both 0 and 1 at the same time. And second, it can commune (or entangle) with other qubits to compound this ability."
you have 3 qubits, 8 combos you could run; such a big number for so few qubits. how do you tell it which to process?
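The counting behind that claim, made concrete in Python: n bits (or qubits read out classically) index 2**n combinations, but a single readout only ever yields n bits:

```python
from itertools import product

n = 3
states = list(product([0, 1], repeat=n))
print(len(states))   # 8 = 2**3 combinations indexed by 3 qubits
for s in states:
    print(s)         # (0,0,0), (0,0,1), ..., (1,1,1)
```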
ok so, computing is for extracting things you don't have! you have a file, you enter it into the cpu, you get an expanded file. HOW? it's written in machine code! it does transistor functions of math: add, subtract, multiply, divide, AND, NOR, XOR, right? so it has patterns, expands. ok but.... what if the program is to, um, generate me 5 billion GPT-2 passages in 1 minute?
what about a neural network processor, multiple operations at once using the others, a hierarchy/heterarchy, so the computer's nodes themselves are a shared hierarchy, to do the add, subtract, AND, NOR... encoding
backprop?
If a file is fed to a processor to get a result, can we predict what the result is for all inputs?
what about cross lines that make signals of light beams separate along them? Re-use!
--------------------------
Ok so I have a turing tape plan, suction transport of pulses, hierarchy, and combinational gravity
&
Let me know if I'm correct, or where I'm wrong: the RAM sends its bits to the CPU, and the assembly codes recognized make the ALU do if-then math, like adding/moving +/- numbers, and doing yes/no rules? It also stores and recalls values. And it has counters for Reset, Next, GoTo address. There's synchronization, clocking, and feedback loops. Where do the million transistors for a CPU come in?? Seems very parallel, like a GPU, to me.
&
Ok, I understand the Turing Machine FINALLY. If this is all that is needed, and a CPU is sequential, then why does a modern processor run any faster if it MUST do these elementary Turing functions? Is the CPU actually parallel!!?? (a toy Turing machine follows below)
And again, why do we need this AND/OR/NOR gate setup? Is this just making the common functions available instead of Turing-ing each one out every time?
https://b1391bd6-da3d-477d-8c01-38cdf774495a.filesusr.com/ugd/44046b_f2c9e41f0b204a34ab78be0ae4953128.pdf
It seems like a computer has data, a program, and a hardware ALU with a control/decoder unit (which just makes common tasks fast?). The buses are data, control, and address. Every input program having an output result is key here.
Or, I forgot, a program file can loop, and over the data as well...... but still.... if the CPU looks at more than 1 bit/byte at a time, that is cheating; it's not sequential by any means, I think....
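To make "elementary Turing functions" concrete, here is a minimal Turing machine in Python: one head, one tape, one rule table. It increments a binary number (a standard textbook example, my own illustration, not from the linked PDF):

```python
def run(tape, rules, state="carry", head=None):
    """Step a one-tape Turing machine until it halts; '_' is the blank symbol."""
    tape = dict(enumerate(tape))
    head = len(tape) - 1 if head is None else head   # start at rightmost digit
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

rules = {  # (state, read) -> (write, head move, next state)
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues left
    ("carry", "0"): ("1",  0, "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1",  0, "halt"),    # ran off the left edge: new digit
}

print(run("1011", rules))   # 1100  (11 + 1 = 12 in binary)
```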
&
Ok, I give up: the sequential CPU is parallel, but only so far as it has to e.g. predict the Next Word before it can decide the Next Next Word, like in the GPT-2 algorithm. Modern CPUs have a lot of parallel components, and wide word registers as well, to speed them up. And their quantum computer uses teleportation to speed up the signals. I can't beat that.
Btw, if they use the qubits to go exponential in computation, that cannot be, because you need to define which bits you want! E.g. 10100001010101111010...... they suggest 3 qubits give you 2x2x2=8 combos, but only 3 controllable bits. So I just debunked it, there you go. They are using the quantum computer to speed up signal traveling, not to get more from nothing, as all bits must be defined. You can't have each qubit in both states and define a bitstream! What? 4 qubits = 1111111111111111? No. They need to be different!
&
So Google's quantum computer can only be speeding up signal transfer or upping the parallel count. Think about it: they say their qubit count per transistor or pc is e.g. 52 bits, which gives it e.g. 1111111111111111111111111111111111........ bit combos..... but you can't define the string, e.g. 111100001111101011001011110101010101010101........; it only scales it. To get more compute done they need more components, or more speed from the components, period. Or optimization. They can skip computing using common codes, but they can't plot 52 qubits and think there's enough info in the 52 bits.
My 3 ideas were a hierarchy computer, a brain-like computer, and macro qubits (wireless antennas).
https://www.livescience.com/google-hits-quantum-supremacy.html
&
Reading the link, it seems like the whole quantum computer has 52 qubits, each in 52 states? Like rotation, location, size, etc. So you have 52 bits where each is made of 52 bits as well. Just one qubit can be rotated offset Yes, over here No, and so on.... 10100010111010111. In fact the qubit rotation can be millions of possible rotations. Anyhow, big number, and they then capture by wireless at one place all their presences and sum them up to a number. I was going to use antennas.
What about having little antenna capsules that each have 10 bits in them? Your 100-bit sequence is wirelessly sent to a crystal that picks up the count/wave.

-------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------

To quote some people:

"
I. J. Good 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind [...]. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
"

"
Tim Tyler:
However, as I argue in my essay on the intelligence explosion - this explosion is really just a small part of the existing ongoing technology explosion.

Explosions start suddenly and diminish over time. Doesn't the current growth in intelligence accelerate gradually?

Actually, explosions start gradually with an exponential growth process, and only after an extended period do they gradually peter out. The metaphor of an explosion looks pretty appropriate to me.

Dawkins also uses the metaphor of an explosion for the phenomenon - in the chapter of River out of Eden entitled "The Replication Bomb", saying that - while some stars may "go supernova" - stars harbouring living systems might instead "go information".
"

"
Nick Bostrom:
Emergence of superintelligence may be sudden. It appears much harder to get from where we are now to human-level artificial intelligence than to get from there to superintelligence. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly. That is, the transition from a state where we have a roughly human-level artificial intelligence to a state where we have full-blown superintelligence, with revolutionary applications, may be very rapid, perhaps a matter of days rather than years. This possibility of a sudden emergence of superintelligence is referred to as the singularity hypothesis.
"

"
Eliezer Yudkowsky:
From the standpoint of existential risk, one of the most critical points about Artificial Intelligence is that an Artificial Intelligence might increase in intelligence extremely fast. The obvious reason to suspect this possibility is recursive self-improvement. (Good 1965.) The AI becomes smarter, including becoming smarter at the task of writing the internal cognitive functions of an AI, so the AI can rewrite its existing cognitive functions to work even better, which makes the AI still smarter, including smarter at the task of rewriting itself, so that it makes yet more improvements. [...] The key implication for our purposes is that an AI might make a huge jump in intelligence after reaching some threshold of criticality.
"

"
Tim Tyler:
We already have an intelligent system which modifies itself. It is known as "the man-machine symbiosis".
Machines are already heavily involved in the design of other machines. No-one could design a modern CPU without the use of computers. No one could build one without the help of sophisticated machinery.
"

"
Tim Tyler:
Machine programmers
Machines do not yet make expert computer programmers - but they already contribute massively to computer programming tasks:
Refactoring: Refactoring involves performing rearrangements of code which preserve its function, and improve its readability and maintainability - or facilitate future improvements. Much refactoring is done by daemons - and their existence massively speeds up the production of working code. Refactoring daemons enable tasks which would previously have been intractable.
Specification languages: High level languages are another example of programming by machines. When the author started programming, everything was done in machine code. With 32K of RAM - most of which was devoted to screen memory - optimisation, compression and memory management occupied a lot of programming time. These days, most of the donkey work of programming is performed mechanically - humans get to work in a cushy high-level environment where they don't have to bother themselves with tasks like collecting garbage, or deciding in advance how big their data structures are. Ultimately, specifications will be translated into code in this way.
Machines also automatically detect programming errors, and automatically test existing programs.
Machines will gradually get better at these kinds of computer programming tasks - taking over an increasing quantity of the load from humans. This includes tasks that involve modifying their own source code.
"

"
Tim Tyler:
Of course, we have had brain-computer interfaces for decades. They involve keyboards, mice, eyes, arms, hands and software.

Won't there be a sudden speed-up when sluggish humans are finally eliminated from the loop? Probably not. By that point, machine "code wizards" will be writing most of the code anyway - so progress will already be pretty rapid. However, humans will most likely want to keep an eye on their budding machine programmers for a while - so there will be regular code reviews. As confidence is gained in the results, the reviews will be needed less frequently. So: humans will probably not drop out suddenly - but rather gradually, with increasingly-infrequent compulsory code reviews.

It is a fallacy to argue that today's machines are designed by humans, and - since the intelligence of individual humans is not increasing - the intelligence explosion has not started yet. Today's machines are actually designed by networks of humans with the help of machines. The machines are currently improving - and so are the networking technologies that link humans with other humans and with machines. Machines pre-process our sensory data, and post-process our motor outputs - and the results are smarter than we are alone.

A kind of corollary of the point made in this essay is that there already exists an intelligent self-improving system - namely the man-machine civilisation.
We are in the midst of the "intelligence explosion" now. It is part of the ongoing "technology explosion", which affects many aspects of technology - not just intelligence.
The "technology explosion" is really just another name for evolution - with pioneering adaptations being seen as a kind of natural technology - put together by tinkering, rather than by engineers.
The technology explosion started long ago - and will likely reverberate into the future for some time to come.
"

"
Albert Einstein:
If you can't explain it simply, then you don't understand it well enough.
"


Human-level intelligence or "AGI" (Artificial General Intelligence) has inched closer with recent advancements in the Computer Science field:
https://openai.com/progress/
https://www.youtube.com/user/keeroyz/videos

If we could more clearly understand what intelligence is, we could make AGI emerge sooner. The exact definition of intelligence is still up for debate and has various (although similar) understandings (and many algorithms):
https://arxiv.org/pdf/0706.3639.pdf

Combining the definitions in a clear manner: An intelligent machine has Darwinian-related goals (for Reinforcement Learning) to test its survival ability, which it seeks to achieve/solve using limited resources and time, all while faced with a massive search-space of possible steps to take. Using limited knowledge and skills (missing data) that it learns from its parents through many imitations and from its environment through many observations, it must learn a circular web (a heterarchy) of related meanings (where dictionary words explain other words... a tight small-world framework) so that it can compress lots of data (by learning patterns in the data), allowing it to solve complex problems by abstracting them; and the same for learning a hierarchy, where letters, words, and phrases are re-used to build larger items. After training, it must then pay attention to the correct hints (patterns) to recognize (adapt to) unseen situations, and do so again to generate (i.e. remember/reason/predict/plan) what is most commonly done, related, and favorited (simple answers, ones that usually work, e.g. use the genie, check twice, etc.) to solve these unseen problems in a wide range of environments (and then adapt the solution as needed) by leveraging past experiences that are similar, so that the organism can successfully find food and mates to survive per Darwinian Theory.

Hence the name Artificial [General] Intelligence: an AI that is robust/flexible by using universal law patterns and can achieve its hardwired goals in a wide range of unseen contexts, such as learning to open a fridge and then generalizing to open other types of handles in new lighting conditions, higher gravity, and so on. As OpenAI showed, training in simulation under many random life-like conditions trained it to learn patterns well enough to generalize, plus transfer from simulation to the real world. An agent may need to change its environment or itself to achieve its goals; it learns to adapt and knows what to do by utilizing extra context. AGI can be specialized, however, like humans, losing touch with the G in AGI: able to make analogies and use hints but with too-limited data and hence low generalizing ability. To attain and surpass human generalizing ability, this can be remedied by training on diverse and big data, as OpenAI showed in GPT-2 and the robotic hand. The training in these algorithms lets the AI's brain layers learn patterns found in large amounts of data. For example, circles make up cat eyes and buttons in images, and cats look similar to dogs.

Some definitions I referenced above completely lack details, such as "Intelligence means getting better over time", because Better or Good are contextual to a species's hardwired goal, while other definitions correct the wrong ones, such as "successful intelligence within your sociocultural context — meaning that people have different goals for themselves" (such as people's careers or animals' unique ways of breeding). One proposal was the Turing Test: to appear intelligent and fool humans. I will clarify below what intelligence truly is. Good, Better, and Intelligent are truly anything you want them to be, but there is only one way things work out or 'flow' in the end, and it's certainly not the Turing Test! As it can easily be annihilated. No arrangement of particles is better or ugly or fun or conscious, as we call them. There is physics at play in what is 'Better'.

Something primitive: a common thing to find on Youtube is little creatures mutating in an evolutionary simulation that learn to walk etc. using just random motor actions, an accelerometer sensor with reward for speed/time, and a neural network. Some wonder: what if they go from walking to building rockets? OpenAI showed a yet better result that had even crossed my mind:
https://openai.com/blog/emergent-tool-use/
What these red and blue agents are doing is trying random actions over millions of Darwinian 'games' and getting reward when they do the correct behaviour. Then the other team has to stumble on the correct counter-behaviour; recursion is what makes it progressively get 'Better'. They do use hints, patterns, as we talked about, to skip doing too much brute-force search. Possibly running around corners, being near their friend, copying/syncing their friend's behaviour, and carrying boxes around are useful for rewarding behaviours, and an agent is therefore more likely to learn a higher-level behaviour based on the 5 it is doing simultaneously. They have a visual field that pays attention to key features that are frequent, related, or favorite, just like GPT-2 for text. They are using vision, this is great, and they use patterns to search through a big space, great again, but what could be better than testing an immediate decision one after another until you get it right?
Humans are constantly thinking in their brains (using vision and sound streams in their working memory) about what they want to do, what else it could mean (translating, aka recognizing unseen sequences), and seeing how to do it or what could/did occur (entailment or 'expansion') or what could be excluded (summarization), including segmentation (the correct re-used nodes in a network). This happens in all sensory types. Cats drink, they look like dogs, and they are segmentable from the background. The patterns we learn model the real world and allow us to simulate/imagine what will most likely occur using probabilities, just like GPT-2 does for Next Word prediction. It allows you to do 'internal discovery' without having to primitively act in the real world immediately with no planning, and it also removes the need to do external Research & Development constantly (you do check for updates for some things, but if no solution is available on the internet, then...), meaning you can extract new data in your brain without any external insights, as long as you have a lot of data. Simulating/answering your own new branching questions/goals allows you to see an accident occur, allows you to be someone else, to try alternative paths, to retreat a step back, to judge which choice is best, to increase the size of your tool, to try something faster/cheaper/safer/impossible that you can't do in real life; you can even run faster or speed it up in your brain. Sight-seeing in games or dreams lets you augment your data and expand/extract it from compression. The human brain doesn't just respond primitively to cues anymore; it simulates its very own world. Intelligence is using data to learn patterns and then find solutions to unseen issues. R&D is still critical: you do recursively test the prediction, or do a test if you can't generate an answer internally from what you know. The same way we generate data is the same way we, given an answer, judge it and verify it, only faster.

Paying attention is really important
https://arxiv.org/pdf/1706.03762.pdf
1) When a noise in the room starts buzzing, we pay attention to that noise more.
2) The goals that pop up on our minds even have priorities of which to do first; usually it's food etc., how to do your job better, how to repair your house. For me, I wake up every day asking questions like: How does AGI work? How can I get laymen into this field quicker?
3) When we have candidate words to add onto an unfinished text story like in GPT-2, we pay attention more to related words to narrow down the search space, to frequent words, loved words, and their locations/global structure, using abstraction/sequence vectors, and we do this during recognition (translation) as well, including adapting the solution. Attention, context, patterns are so important. It lets us know a solution that we didn't have the answer to, by paying attention to the correct context. The more extra context you have and the better quality it is, the better you can repair. Then you can repair faster once you have repaired. And so on, recursively.
If I learn 1 pattern, I can answer 3 unseen questions by generalizing. If I learn 2 patterns, 23 questions, because each pattern enables me an answer and some answers are only discovered when I know both patterns. 3 patterns, 489 questions. It's exponential. If I learn 1 pattern and can answer 3 questions, then I just learnt 3 new patterns. If I keep going, I may become ungrounded in a delusion of reality. If I have enough data, I don't need to physically test or grab new data. The more diverse data/patterns it learns (or random motor actions/patterns it tries), the exponentially more it 'learns'/can solve nearly any problem without needing to learn any new data. The more patterns it learns (where words explain other words in a dictionary), the exponentially tighter it can understand everything using everything; the network's understanding of the world comes to an approx. equilibrium and "knows all" without actually knowing all! What happens is you can answer unseen questions exponentially more likely if you have more diverse patterns learnt. So our goals, the questions on our minds, may not mostly be questions we can answer yet, but eventually we'll be able to get to stage 2 of our issue. Then stage 3 even faster. And so on. We'll never know all, but we can answer approx. optimally the questions we seek to know. This is why big data is so important: evolution is exponential near the end, we generate new data from the old so fast because we have so much context/patterns, and the data/reflection builds faster the more data it adds... Notice it is the communication/information/computing technology that is getting 'Better' in the last few hundred years, and so is the ability to prevent death by using knowledge, communication, tools.

For both motor choice and knowledge thinking, Hindsight Experience Replay (HER) allows you to store actions or facts for later use. Of 10 doors you may guess the wrong 6, each with a unique object behind the door, but after 10 checks you have 10 answers/facts (or all 10 basketball moves: hoop score, hoop boomerang, hoop miss, teammate knockout, etc.). This bag of tidbits is not just useful for later need, but also useful for mass context voting in Self-Attention's bag of context for downstream tasks, like having 100 words in a sentence disambiguate a word to translate/fill in.
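A hedged sketch of the HER relabeling trick behind the doors example. The door/goal names are invented stand-ins; the core idea is that a 'failed' attempt is stored a second time, relabelled as a success for the goal it actually achieved:

```python
replay_buffer = []

def store_with_hindsight(state, action, achieved_goal, intended_goal):
    # The transition as a failure w.r.t. what we wanted...
    replay_buffer.append((state, action, intended_goal,
                          achieved_goal == intended_goal))
    # ...and relabelled as a success w.r.t. what actually happened,
    # so even a wrong door yields a usable fact.
    replay_buffer.append((state, action, achieved_goal, True))

store_with_hindsight("door_6", "open",
                     achieved_goal="lamp", intended_goal="treasure")
print(replay_buffer)
# [('door_6', 'open', 'treasure', False), ('door_6', 'open', 'lamp', True)]
```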

It needs to simulate plans about the world in its brain using lots of data: text, images, etc. It needs to be like GPT-2. If it's just a hardcoded pre-program, or knows very little data, or only does primitive cue>response behaviour without thinking globally of all sorts of things when it needs to, or can't abstract/expand data, it won't do all and anything to solve its 1 goal: Survive. This thinker also needs to chain its goal/question in its mind and attempt to branch it by building new branches/data based on old data.

In our universe, everything can be called data. It's all data. With patterns. There are only a few types of elementary particles that make up our worlds. None of the sentences in this universe are 100% unique, nor are images, or any other data humans work with; the same word pops up, a lot. For example the sequence "My dog saw a dog chase after a new cat.": cat=dog more than cat=bird. A lot is possible in the universe, especially virtual-reality physics, but not everything is possible here. Things are predictable. Every time a planet gets too large, no matter where the atoms are sitting or what type of atom they are, it will become a star and get too hot and give off the mass it gained back after robbing it. The same ending, no matter the starting conditions. The atoms are affecting each other, like qubits entangled in superposition or word2vec; it's a combinational explosion from patterns/heat. The combinational explosion is precisely what brings the planet to singularity, to become a star and burst. Imagine a hungry cat in a room and a bun placed anywhere: the cat ends up back on its bed to sleep with a full stomach. Many patterns are on Earth; everyone eats, showers, breathes... Other planetary civilizations may have other things going on, but like us the same physics will be happening for them all: replication, resisting death, a communication of information/data, and competition and then cooperation. On both planets, you can build shelters, wash using water, destroy using fire, build transportation highways, a vacuum chamber, a lab, square homes made of only 6 walls. All data in the universe has patterns. An English sentence has re-occurring words and related words. Vision, the whole universe, does as well, and is language as well. Humans model these patterns; we think with vision and text about patterns. Humans actually invent words for objects and for action behaviours. The data is grounded. With so many patterns, the universe is predictable; that's why it has an approx. singular optimal form by the end of evolution for a given system. Evolution on Earth is *exponential, not a *singularity. However, no matter the starting conditions, we do arrive at an approx. equilibrium, and it is hence an end-point or singularity that will always be reached no matter a tweak of the starting conditions. Lossless compression for the Hutter Prize has resulted in 100MB compressed down to just ~14MB, yet the universe had a starting condition and has an ending result as well for a system of a certain size and the arrangement of particles it has; along the way in evolution, every file we have compressed can approx. be regenerated from universal law patterns, even if all is destroyed. The universe was the real compression, and now it's expanding.

A too-large atom, or too-large planet, or too-hot water is an unstable system and will give off bursts because of extra heat. Notice the large planet is not hot at first if you build your own, not at all; you can keep adding more, however it catches fire at the core on its own and says sorry, can't do that. Here we have a clear example of the exponential pattern explosion: the heat in each atom of the large planet exchanges, like word2vec or superposition entanglement of qubits. They share, like in a hierarchy or heterarchy.

There are 4 combinational explosions at least. I recognized the same thing enough times to recognize this. A planet becomes a star, at the core first, when there are too many atoms, because each tugs its gravity (heat) on the others. A quantum computer has more computation the more qubits it has, same thing. A brain/computer can model predictions (or outright simulate physics) by having extra context: for later use, to expand it, to have more votes weigh in, to refine its hierarchy and heterarchy feature structures. A swarm of general nanobot cells can do a lot, lot more the more there are of themselves; they can replicate exponentially faster, for example: 1, 2, 4, 8, 16, 32...

A universally intelligent machine is possible in this universe. If you look at all of the Two Minute Papers videos I referenced, you will see they are all data completion or 'entailment', translation (recognition), summarization, segmentation, and the same for images and audio and motor: frame-rate fill-in, word fill-in, image fill-in/extend, super-zoom-in, style translation, text to image, video prediction, black-and-white to color enhancement, translation, summarization, recognition, which gives us missing data/answers to our Questions, better than bland motor trial-and-error can do. You can see the whole AI field is about patterns, big data, and is all about emerging/self-organizing, repairing/self-regenerating, and resisting death to survive. This is intelligence. Self-Attention using *more *recursively-refined data/context is intelligence. Immortality is intelligence. It's Darwinian. Humans/cells just try to survive and breed, nothing more. They will say everything possible to resist Death and will attempt Breeding/Teaching information (or the male will at least). They will bring up the things they need to breed/survive (food, friends, mates, sight-seeing to eat new data) and try to say these things are more than special. They're not; it's just physics. The whole Earth is a machine composed of smaller machines, and so on, all particles. Intelligence is about resisting change from equilibrium by repairing/regenerating. The data spreads and updates each other as a voting context. Physics causes a lucky planet to survive, and by the end of its global planetary evolution it resists death very well and can instantly re-generate if half destroyed. Earth will become nanobots: intelligent General cells that breed and can eat all food by changing its composition at the particle level. Anyone working on AI is working on rejuvenating regenerative technology. Generative models. You can't kill the best technology possible. Earth will become approx. globally-optimally immortal nanotech once it reaches approx. equilibrium (locally; small organs will be general but can't do 'all', like a calculator can only calculate) and will grow into a galaxy by eating surrounding matter. Have you noticed technology these days is improving exponentially faster? The data technology is, actually. Death prevention is too, as a result. The extra and refined context lets us all add missing data back faster, and then what we have lets us add missing data back even faster, and so on (exponential regeneration: you can't kill it even if it was completely gone, it just emerges, repairs, and approx. (as all things in the AI field are approx. and usually use probabilities) resists change from equilibrium), and us human brains act as nodes, like a higher-level hierarchy or heterarchy, not just for cells or brains. Evolution on Earth, although there is still a higher mass of bacteria compared to humans, is occurring exponentially. A resistance to change emerges exponentially faster because it uses extra context to repair and add more context back, which lets it add even more context back. It's a combinational explosion. We just might actually explode like a star. But the probability is lower as we repair, and that is what we will continue doing. Oh look, some reference: http://people.idsia.ch/~juergen/unilearn.html. We can see here they say that AIXI can make AGI as well, just slower; more hints/data is better, and as you learn more you can exponentially self-improve by knowing where to take new data from and what generative answers to generate.

When you extract a huge file as big as Earth, it learns by compressing tons of data in brains, to a point where the performance increase levels off; over evolution, the more data patterns it has, the faster the extraction, until it levels off. An S-curve made of S-curves. Extraction (prediction/generation) involves compression, but it's just extraction really. To extract, you need patterns. Maybe they take turns, and we'll end up in a black hole that only big-bangs like a star long afterwards. Like an advanced sun.

Physics is regenerating back the superorganism from nothing; no matter the starting conditions, same result! It can't die. Intelligence is the inevitable in physics. Like a brain, lost neurons don't matter and are in fact Dropout of bad genes; the greater swarm lives on, and in fact replaces the blank or lost employee with a better or same-but-improved employee, or simply has enough data that it has the same nodes cloned redundantly anyhow.

Intelligence is simulating or 'running through' behaviour/data, i.e. generating data, predicting data, using patterns in the universe recorded from multiple sensors. The universe is all data; the Earth is evolving and recursively emerging exponentially faster because there's more refined data. Not because we are 'getting Better' with our new cell phones or kitchens, or look cool, or our food tastes good or is superior to rocks. Intelligence is to reproduce and replicate yourself, and your ideas as well. It's about communicating information, resisting death by predicting and regeneration. Prediction is regenerating. Data recursion and extra data lead to exponential repair. Revising, updating, adjustment, adaptation, replication.

The universe is just data; evolution is intelligence; it's physics. The extra quality context has patterns in it that allow for exponential data recursion, which results in missing data being formed. Iteratively. The equilibrium reached resists change, i.e. death. Because it repairs missing data, it even emerges from nearly nothing. It can repair faster when it has more and better data formed. The larger it grows, the better it can do so as well. That's some good instant regeneration ability. The whole universe is made of the same fundamental particles/laws; that's why things have patterns. When the cell emerged, it wasn't exactly lucky to get a mechanism that can replace parts....... physics allows it to emerge/repair: molecule chains link, and DNA lines up, attracts, duplicates. Redundancy/patterns are crucial, and a giant world is better off with fast access time to nearby memories, meaning there will be many General cell modules in our future world; it won't be a single giant brain with 1 of each memory trying to be accessed by a trillion distant eyes analyzing loads of data.

Evolution is exponential on Earth. It's just physics. Out of many planets, cities, animals, cells, certain lucky or smarter mutations survive and breed, or *persist. Resist death. That's part of why we are alive on Earth and not hit by large meteors: because we were lucky from the random chaotic process, and we are here to see that we survived because we didn't die. Same for our genes. However, we are not lucky to have the physics we do; there's no definition of what's Good or Superior, only what we say we want. Water flowing down a river is happy to be doing so and has no choice. As said, fighting to stay alive and repair using food/mates is what makes you think food and others look Superior, Alive, Sacred, Wise, Tasty, Sexy, Good.

In the evolution of Earth from start to finish, it is exponential and is made of approx. exponential S-curves made of smaller S-curves that level off, like the performance of a neural network once it reaches equilibrium. Not for the universe or other planets though; only the surroundings of a given Earth become extreme nanobot tech, just like the soil becomes cells after the first emerges! What is Better or Pretty? Not small programs or women! Nothing is. But physics has its laws that make 'progress' happen. And they're happy to be doing what they do. We aren't lucky to have our physics! Small programs or choices of action seem better to us right now (Occam's Razor) because larger systems are more likely to have chaos or fewer patterns. But later, larger systems/choices will be approx. optimal. Imagine choosing to cross the road and a murderer is on that road. Do you use a gun, or the megabot genie? He can't see you. There's a small chance you'll die if you use the gun. So you use the bigger system. Both systems could be optimal, like at the end of evolution, but the larger is more powerful. Larger systems can do things small ones can't. Imagine comparing a 100-nanobot sphere to a trillion-nanobot sphere where both are optimal utopias: the bigger one will win. As such, larger systems become optimal exponentially faster; the size increase is exponential; Earth will grow quite fast. A cell however, not so fast; even an optimal general nanocell, not that fast.

Bigger systems are more powerful than small systems. Smaller is better, bigger has more junk, but once there's no junk it will crush the small system. In a sense, AGI is simple and efficient; there's a pattern from cells to brains to cities, but yes, bigger systems are more powerful. More context is better, however you do also need summarization, like a hierarchy where you talk about 1 node that does a lot beneath itself, which allows more context I think; and nanobots will form nanobots that form nanobots, which can then fly in the sky while doing tasks, a nesting of actions. Don't ever ignore having a lot of information, even if it's probably wrong, because you will leverage it as a bag of context and improve the more you learn.

I know a file can be compressed. I'm talking about de-compression in the universe. Larger systems that have larger context are more likely to survive. A utopian sphere can fight a small pirate UFO. Small systems are more likely to have less junk and are therefore the first systems that are the most powerful systems. As it stands, bacteria and mites and ants are basically everywhere and are undefeatable from their current stage for now. When nanobots come, all Earth will be transformed. In this sense, larger systems that have very little junk are what decompression is about in this universe, as those are exactly the systems it forms. Information evolves and churns itself using its own next state. The past predicts the future, i.e. the present.

My goal is human AI made in the computer. It must generate discoveries using past knowledge. It includes translation, entailment, summarization, segmentation. Compression enables its knowledge to be general enough to solve similar/translatable problems. Then the questions it answers are reward reinforcement and chain forward along paths as a branching tree, like: how can I gain food? Either go down the path of obtaining money or grow your own food. It's very explainable if you know the answers. After strong AI is made, it will replicate as nanobots, just as cells and mammals do. This ensures not only maximal computation/storage of information but also max life persistence. A fog of nanobots (a group of humans) is like a brain: many neurons/brain nodes in the fog die each day, but the mass of the group/brain stays unchanged. This information loss doesn't affect the mass, and this is the same in translation: some words are ambiguous in meaning, but the surrounding words help translate the word to a better word, e.g. "please [stick] it to the door" (becomes [attach]).

The matter and energy of Earth self-organizes on its own until it reaches a globally optimal form (a technological god) and settles into the final equilibrium state. Cells replicate and terraform Earth, but they aren't great food finders yet. Nanobots will turn atoms into other types of atoms and spread like cancer. The 1% of cells that are evolving exponentially faster are humans, and only about 5% of humans are working on science and making advances. That 1% is the number that shoots up to 100% very fast once the nanobot replicators emerge. The optimal form of Earth is a god, and it can repair itself (self-organize) very fast and reach the globally optimal form quickly again if damaged. We already know that our scientific humans' progress is advancing at an exponential rate. The repairing gets faster near the end. The final optimal state will be the technological god sitting at the exponential end, born from mass cells replicating fast, able to resist damage/death (change from equilibrium) extremely well (except for possibly entropy heat death).

Ants etc. are at their singularity and already made it - they're everywhere - and so are humans, really... bigger systems aren't yet, though. When the first nanobots are made, they will not actually be small; they will be larger organs made of nanobots, of course. To formalize this: a system has a size and a capability to resist death. A nanobot will be much more capable than a tick and can avoid meteors etc. better as well. Ants only have a pretty good form (being so small) and a pretty good extinction defense. Larger visible swarms are yet to come; they will be larger and will be better at resisting change, at local and global scales.

"all should share in common the attribute of being entities that decrease their internal entropy at the expense of free energy obtained from its surroundings. As entropy allows the quantification of the degree of disorder in a system, any envisioned lifeform must have a higher degree of order than its supporting environment."

What makes an agent such as a human brain or DNA behave this way while the rest of the environment is like.... tables, cameras, food, spoons, floor, pencils...? It's as if everything else is the result/decompression from the brain/agent. The brain/DNA is the central 'brain' making sure it stays alive and replicates - by decompressing, finding food/data, finding patterns, and digesting/organizing its connections.

From Wikipedia:
"One possible conclusion from mixing the concepts of Kolmogorov complexity and Occam's razor is that an ideal data compressor would also be a scientific explanation/formulation generator. Some attempts have been made to re-derive known laws from considerations of simplicity or compressibility."

"Marcus Hutter's universal artificial intelligence builds upon Solomonoff's mathematical formalization of the razor to calculate the expected value of an action."

"The minimum instruction set of a universal Turing machine requires approximately the same length description across different formulations, and is small compared to the Kolmogorov complexity of most practical theories. Marcus Hutter has used this consistency to define a "natural" Turing machine of small size as the proper basis for excluding arbitrarily complex instruction sets in the formulation of razors.[66] Describing the program for the universal program as the "hypothesis", and the representation of the evidence as program data, it has been formally proven under Zermelo–Fraenkel set theory that "the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized."[67] Interpreting this as minimising the total length of a two-part message encoding model followed by data given model gives us the minimum message length (MML) principle."

We can see here that it suggests small compressed files AND small decision choices follow Occam's Razor and are the best 'programs' - allowing for generation, not just degeneration :). This is half true: a small system is all it takes to store something, or the universe itself. But when it comes to decompressing a universe or system and generating an action program based on reasoning/data, our universe does this via extra context, or 'big data', using patterns. It re-uses all the stored compressed data, and all decompressed data, to decompress. This is why a large nanobot utopia can totally defend against a small UFO. When has a human died because of an ant? Rarely. However, a radioactive particle can do damage; I'm not sure our future utopia can run away from light-speed blasts of energy like gamma rays. Maybe we just come back to life like ants do - there are many Earths out there, and there will be another again.

The scientific humans will make the first AGIs, which will in turn tell us how to make the first nanobot. To get to such a point, we need to make an AGI. But even if we had enough electricity and computers to run a million AGIs, they can't do anything we can't if they really are only as intelligent and capable as us and no more. We need ASIs (Artificial Super Intelligences), many of them. Like evolution, this is exponential as well. We only need to make 1 mostly-functioning AGI to finish a functioning AGI and give it faster thinking, motors, more memory, etc. to make it an ASI. They will have more memory, can erase memories, have longer attention, more sensory/motor types, more and better sensors/motors, goals for immortality, the ability to update their intelligence, and stronger connections/relationships to friends; they never get tired or bored, never sleep, eat, or use the bathroom; they will focus, have schedules/nested routines, various algorithms, any body or morphable bodies, strong cooperation, and wireless communication of visual thoughts; each updates the others with its general skills so none must re-learn (they can duplicate brains as well, giving birth to adult minds); they can teleport the brain/body/tools using parallel wireless technology to a body of nanobots far away, clone, use a big brain to control a nanobot, can be immortal, and much, much more. The nanobot swarm can fly in the air while groups do tasks while groups within those groups do tasks, all unaffected by higher-level motions. The swarm will be a wireless net of 'hands' and 'eyes' and 'brain' nodes all at the same time. Higher-level intelligence, such as ASSI, will involve not self-recursion of intelligence but data-recursion. All of evolution has been doing this the whole time. Humans have already been working with machines to store data, make cars, help us program, make new computer chips, etc. We've seen in humans that a more flexible body, larger data intake from around Earth, and cooperation with other agents led to greater, faster progress in evolution, and that simulating and predicting using the world model in our brain led to fast R&D.

The fastest way to increase your compute, data intake, experimentation/manipulation, and resistance to death is to transform Earth into general/flexible/robust nanocells: highly flexible small modules that make up larger modules, and so on. There will still be metal beams, nuclear chambers, etc. Nanocells can still latch onto each other, though, to act as bone.

Once the first AGI is made, it will be faster, ex. by 50x using light, never sleep (2x faster), never get tired or unfocused (faster!), and have more memory, attention, sensors, data - the list goes on and on, actually. 1 AGI will seem like 10 brains, [maybe] 1,000. It will find ways to improve neural discovery and research.
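
Back-of-envelope only: multiplying the rough factors above (the 50x and 2x are this text's own guesses; the focus factor is an extra assumption added here) gives a feel for why 1 AGI could seem like 100+ brains:

# All numbers are speculative guesses from the paragraph above, not measurements.
factors = {
    "faster substrate (light-based)": 50,
    "no sleep": 2,
    "no fatigue / constant focus": 1.5,  # assumed modest extra gain
}
effective = 1.0
for reason, f in factors.items():
    effective *= f
print(effective)  # ~150x one human brain under these assumptions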

But we DO need more of them. I realized nanotechnology is the answer, because although humans will begin to manufacture much, much more computing hardware, it just won't be enough in time (maybe!). You can fit more in, utilize resources, and make more computers faster. The first 1,000 AGIs running on supercomputers etc. will work on extending their processing and data intake using mass-scale, easily fabricated nanobots. They will tell us how, or tell us how to give them bodies/actuators so they can do it themselves. These will be less advanced nanobots, focused more on making more computer storage/processing. This won't eat up all of Earth exactly... but later it will. Later, the most advanced corpus of nanobots will be multi-flexible and will turn Earth into a final optimal form. This is the most efficient path. It makes their extinction highly unlikely while giving them extreme power.

If you want to see a bit of what such an optimal organism looks like, take a look at the T-3000 at the end of the movie Terminator Genisys. It will be as amazing as the moment the first natural cell emerged on Earth :) This artificial cell is much smarter, though. It will replicate much faster, like cancer. It will find more edible food than anything before it.


CONTEXT BAG (COMBINATORIAL EXPLOSION OF WISDOM/POSSIBILITIES):
The bigger a meteor is, the more context (insights) you can get: not only do you have a lot more Timbits the bigger the meteor is, but after just a few more particles are added to that meteor you get a huge explosion of many more combinations of context, which can all vote and weigh in on a solution and let you generate a Next Word. I can babble about stuff and just talk about it, and it might sound really stupid at first, but it doesn't matter, because I can use my bag of context and it works: I can actually make a lot of discoveries iteratively, save them, and then reuse them to make deeper ones without going to look at the real world to test them, as long as I have a large enough knowledge base to get that explosion effect. You can translate a word in a sentence, or even just fill one in, using context; i.e., the more words in the sentence, the better you can do it. You can say "the cat was eating food" or "the tiger was eating food" if there are a lot of cat words in the sentence; or, if the context of the sentence is about a product at a store, you can say "I was going to buy the cat product" or "I was going to buy the wheelchair product". So you see, the word to fill in, translate/transform, or add to the sequence can be totally different based on the context, because it's similarity in context and not just a direct word-to-word mapping of wheelchair to cat.
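
The "explosion" here is just subset counting: each new fact added to the bag roughly doubles the number of fact-combinations available to vote. A minimal sketch:

# Counting the subsets of k facts that could jointly vote on a next word.
# Adding a few facts to the context bag explodes the number of combinations.
from math import comb

for n in (10, 20, 30, 40):                               # facts in the context bag
    print(n, sum(comb(n, k) for k in range(1, n + 1)))   # = 2**n - 1 combinations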

The higher the level you look at, the more structures and effects you can get. There are a lot of different effects and theories in physics, from quantum mechanics to strings, to tons of types of atoms and particles, to the Higgs boson, to static, to even more molecules, and so many different types of objects from animals to cell phones. Yet all we need to look at is just one atom, because the atom is made out of all we really need. The periodic table only has a hundred or so atoms, and the bigger ones are just more of the same stuff: electrons, protons, neutrons, and photons. I'm starting to see patterns here in my data, by the way. There are only so many types of atoms because the larger they get, the more unstable they get. If we look at the higher level, we see the same effect: when planets get too big and have so much mass, there is a greedy hoarding, but they actually give it back, thankfully - they get so huge that they ignite, they burst, they give off heat waves, because they are unstable and the probability is high that they will give off energy. These planets become stars. They give off radiation just like atoms do, and they have orbiting bodies the way atoms have orbiting electrons. Something similar must be happening to make them unstable when they get too big, but it always starts with the atom. The radiation that comes out of an unstable atom boils off in a heat burst, just like boiling water, or even a boiling water sphere in space. This happens because the random motion in the clump (the atom, or the water) is at a very high temperature, has more motion, and the glue of attraction isn't holding it together. Of course this can happen with a solid too if it's extremely hot; however, we're talking about a planet, or just an atom, that boils off radiation without being extremely hot. Therefore the reason for the unstable burst in the atom must be that there is indeed more heat than there usually is in a smaller atom type - and indeed there would be, if there are more particles in the nucleus. All the heat motion has to do is move from one particle to the next, and suddenly one particle's motion is unaligned with the rest and has enough energy to jump off the other way. When this happens, it is an unstable atom giving off radiation, resisting growing any further into a new type of atom; it will be short-lived, but could possibly grow larger if cooled. In the case of a planet becoming a star, a chain reaction starts in the core, and the planet gives off the extra heat that the glue cannot hold in. You should be able to get a larger planet if the planet is made of smaller, less unstable atom types. Atoms and planets both have orbiting moons. We have seen here that atoms will explode when they are too large, or will still do so if a huge number of atoms surrounds them, as on a planet. It could be possible that a galaxy could explode if it gets too big, even though its planets are not overly big and its atoms are not overly big. The center would get hot.


Earth will evolve into an approximately optimal final form via self-organizing physics. The final state is the same no matter the starting conditions? If this is robust, general self-regeneration, it should be so. Take a computer simulation: a ball starts displaced and moves toward the bottom of a silver cone ditch under gravity. No matter if we place some pinball bumper knobs on the slope, it will reach the bottom hole and sit there at equilibrium!
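
A minimal simulation sketch of that thought experiment (cone shape and bump size are invented for illustration): the ball descends a bumpy cone by always moving downhill, and every starting position settles at the same bottom.

import math

def height(x):
    # Cone slope plus small "pinball knob" bumps (bumps kept small enough
    # that the slope always points toward the bottom).
    return abs(x) + 0.1 * math.sin(8 * x)

def settle(x, step=0.01, iters=5000):
    for _ in range(iters):
        grad = (height(x + 1e-5) - height(x - 1e-5)) / 2e-5
        x -= step * grad                      # gravity: always move downhill
    return x

for start in (-3.0, 1.7, 4.2):                # different starting conditions
    print(start, round(settle(start), 2))     # every start settles near x = 0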

The universe/world is self-regenerating from various starting conditions - it's general physics. The general intelligence is one that is replicating and resisting death, because it is repairing/spreading using a lot of context. It's general physics. Although it's possible there is true randomness in particles and not just apparently random behavior, it may be that our movie/destiny is set from the start and is only playing out. Either way, we now know it is indeed a general self-organizing physics. We have simulated car crashes in a computer as information without needing to simulate atoms, and we can predict them perfectly or change the result using randomness from clock time. Computers can also sidestep true randomness, if it exists, because they use redundancy (the same info replicated many times); computers already use this technique to be reliable and digital. Issues lead to innovation.
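
Here is a sketch of that redundancy trick (the 9 copies and 10% flip rate are arbitrary choices for the demo): store the same bit many times and take a majority vote, so random flips rarely change the stored result.

import random

def noisy_copies(bit, n=9, flip_p=0.1):
    # Each stored copy independently flips with probability flip_p.
    return [bit ^ (random.random() < flip_p) for _ in range(n)]

def majority(copies):
    return int(sum(copies) > len(copies) / 2)

random.seed(0)
trials = 10000
errors = sum(majority(noisy_copies(1)) != 1 for _ in range(trials))
print(errors / trials)  # far below the 10% per-copy flip rate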

Through mutations and more data/larger brains, bigger systems that survive and breed more emerge. Resources on Earth are finite; evolution refines us with better versions of ourselves. We die because evolution wants fast cycling/mutation - in with the new, out with the old resources. It refines the structures/overall structure. Eventually longer lives are attained and less 'change' to the system occurs.

Generative Models. Rejuvenation. Negentropy.

Evolution gets faster near the end. Exponentially. At the end it resists change. I'm not talking about the grand splash where heat death results in only outgoing resources and no further incoming meteorites or whatever; I'm talking about the technological final form Earth is shaping into, a globally optimal god made of nanobots, like the T-3000 in Terminator Genisys (see the end of the movie). This god is the most optimal structure of matter and energy; it can only grow in size now. This is physics equilibrium. It resists change; it has settled. There will (hopefully) remain lots of flexible Brownian motion (else the utopia would be almost frozen, dead of entropy). Why do you think evolution gets faster near the end? Why do you think this god resists change and seeks only to grow in size? Let me give you a hint: if it were severely wounded (45% blown to bits), it would near-instantly regenerate back into form - like a faster, smarter surgeon able to replace an organ with a transplant much quicker. It's heading to the same state again... it's simply repairing. And it does a better job when nearly fully emerged.

Intelligence=Physics. This is the key.

Some humans never want to be immortal, some want it, some want other things, some commit suicide. This is normal. But globally the world is heading to a state that regenerates itself - or to destruction, if we're not careful. The end of evolution on Earth, the final destiny in physics.

Anyone working on AGI is working on Rejuvenation Technology. Self-emergence, self-organizing, self-attention, self-generation, physics, negentropy are the keywords here. Repair. The universe has nonrandom information, and patterns are learnt that allow it to add back missing structures. The universe, DNA, and ideas are all information and are all evolving. Iterative regeneration leads to self-learning, goal learning, predictions, repeat. After repeating, attention on information is self-refocused. This includes nearby friend nodes in the nanobot planet; it will update friends through connections.

Check out OpenAI's website, and search Google for "talktotransformer" to try it. Look up a Self-Attention illustration of the Transformer architecture. It's general and can regenerate text and images and music BACK. It does translation etc. It's a multi-tasker too, and an unsupervised learner. It can scale too. In the near future, I predict it will use not just the context of the text sentence, but images also, to predict the next frame and word. This is a pattern - GloVe does this already. Here it would be working on multi-sensory data, and lots of data too.
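
For reference, here is a minimal single-head self-attention sketch in numpy - not OpenAI's actual code, just the core formula softmax(QK^T / sqrt(d))V from the Transformer paper, with random toy weights:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the context
    return weights @ V                               # each output mixes the whole context

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8): one context-mixed vector per token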

With nonrandom data in the universe and a certain physics destiny, everything is working on its own under predictable laws. When you have data that proves cat=dog, this too is predictable, and if your device is the right one, it will model that predictability. The universe is heading towards equilibrium, which means you end up with a megastructure that repairs itself and resists change, unlike all the devastation in the cosmos. Such instant regeneration stems from a globally-optimal haven made of smart cells, some general and some specialized, including debris like metal parts in between. Equilibrium is instant self-regeneration. Repair/revision is the keyword. You fix, add more data, learn. You do need real-world observations and real-world implementation of the storyPlans you dream in your sim model. Your predictions help compensate for the lack of real-world data.

All of evolution emerged on a journey in which new soup resisted change. Breeding fast resists change. Being a table doesn't defend itself or remake itself or repair itself. The Earth resisted destruction through many deaths; the cell did; the human did. They came together on their own - they literally attract together: mass, mates. The self-organizing/emerging universe state ends up resisting change; until then, change will happen dramatically. Less change will happen near the end, but by that point the instant-regeneration megastar will have nearly appeared.

You thought exponential evolution was just because we were smarter. You think: ah, cell phones, more and better data faster, more humans... But now I've pinpointed it.

As the world gets closer to the equilibrium at the end of evolution, it will regenerate and emerge exponentially quicker, because it is almost here: it will add back a block it was missing, then once there are 2 blocks it can add TWO, then 4 once there are 4; today we add ex. 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000 every year instead of 1. The whole world will become nanobots and will be able to instantly regenerate even if half of it were blown up. It will then grow and eat planets.

The larger it is, the more it can repair, and the faster it can move. If you have a planet where half is only soil and grass, it is not as fast and doesn't have as many workers. So full utilization AND eating nearby planets results in faster, better rejuvenation.

The universe starts as mostly random data, and through informational patterns it is able to predict and re-generate itself back. Synchronization of cells, feature nodes, and agents, at all levels of the hierarchy, increases as evolution happens. They become aligned because of patterns in the data. Just like rocks in a bucket, metronomes on a platform, words in GloVe, or the religious doctrines where you grow up, they equalize with each other and sync up to a similar state, allowing propagation up a hierarchy or heterarchy so it can build higher-up nodes - like a pianist can, easily. These vibrations are also brain waves. Random domains in a brain or country are present even in older, equalized ones. Synchronization is like Attention: they all look at each other and prepare for Translation, to adapt and get better decodings/encodings for this layer.
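
The metronome effect has a standard toy model, the Kuramoto model: coupled oscillators with random starting phases pull each other into synchrony. A minimal sketch (coupling strength and frequency spread are arbitrary demo values):

import math, random

random.seed(0)
N, K, dt = 64, 2.0, 0.05
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]   # 64 "metronomes"
freqs = [1.0 + random.gauss(0, 0.05) for _ in range(N)]       # slightly different speeds

def order(phases):
    # Order parameter: 1.0 = perfect sync, near 0 = random phases.
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

for step in range(2001):
    if step % 500 == 0:
        print(step, round(order(phases), 3))                  # rises toward 1.0
    # Kuramoto update: each oscillator is nudged toward the others' phases.
    pull = [sum(math.sin(q - p) for q in phases) / N for p in phases]
    phases = [p + dt * (w + K * m) for p, w, m in zip(phases, freqs, pull)]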

Just writing this text story, I am making discoveries and re-organizing my fact parts better. It is a self-emergent property in physics using information, and it uses the Self-Attention technique in AI to do the predicting of the next likeliest word. It works on images and music too. It is a general network architecture OpenAI found. Brute-force search can generate & verify ANY algorithm but costs too much time the bigger the problem gets - you need hints, experience, data! But not random data, which would be useless; you need data from the real world that has patterns, like 2 sentences that have 'cat' in them, where 'cat' re-occurs more than 1 time, or even 'c' re-occurs. Like GloVe, the surrounding words let it know cat=dog, and GloVe could later say: if dog=cat/horse and zebra=cat/horse, then therefore dog=zebra - and in the same way, running the algorithm over many senses, it can say sound=image... This also helps it predict the next likeliest word, "The [cat] was in the [cage]", by using multi-sensory data. There's a pattern here, everywhere! The brain is a hierarchy and a heterarchy. Letters make up word parts, those make up words, those make up phrases... and so on. And GloVe is a heterarchy: relations base the next-higher relations. Lower nodes in a net are re-used. Lower cost = more general intelligence on unseen problems, until the training of the network reaches equilibrium. This hierarchy is a hierarchy of contexts, because the node "thank you" is part of many sentences in the same network. Regeneration works on all information with patterns: knowledge, DNA error correction or specialization or duplication, wound healing, person replacing. You really can remake anyone from the dead, even exactly, and clones - they're all you. And you can enter a simulation or a new body. Every day we change at the particle level. No one is a single system really; I am many things and part of a society hierarchy. We fight to live because it's instinct. Tables don't fight back. However, the universe may not 'exist' if it's all just different arrangements of particles - it started off random but ended up in a new non-random state, but that doesn't make anything different. It's just particles; I'm no different than a glass of water. There would have to be an observer of the universe, in a brain, or else there is no one to actually see the universe. But an observer can't move things, only feel them. The universe works by laws and has no need for an observer, nor can it compute such; only to exist must it have an observer! I think.
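
A toy sketch of that GloVe-style reasoning (the vectors below are made up, not trained): if dog and zebra both sit near cat/horse in vector space, dog~zebra falls out of a simple cosine similarity.

import numpy as np

vecs = {  # hypothetical 3-dim "embeddings"
    "cat":   np.array([0.9, 0.1, 0.8]),
    "dog":   np.array([0.8, 0.2, 0.9]),
    "horse": np.array([0.7, 0.3, 0.7]),
    "zebra": np.array([0.7, 0.4, 0.6]),
    "spoon": np.array([0.0, 0.9, 0.1]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(round(cos(vecs["dog"], vecs["zebra"]), 2))   # high: related via shared contexts
print(round(cos(vecs["dog"], vecs["spoon"]), 2))   # low: little shared context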

Entailment uses translation, segmentation, and summarization; they use each other, too. DNA, brains, and groups repair and replace others and do not fail if 1 node dies; they correct errors. They emerge on their own from physics and re-generate using information to reach a globally optimal solution. This is physics. Replication of cells, knowledge (plans), and nanobots results in optimal, efficient, maximal computation, memory, resistance against death on a global scale, etc. A brain has multiple attentions. Which sense is yelling? Which goal is priority? What topic is trending in the last 2,000 words heard? (topic: physics lol). Attention is used to refine something, but it's local attention. Global attention works in the background on the things you don't focus on - all topics, fashion, Trump, etc. This allows it to discover relations and entailments that may let it go deeper, unstump itself, and make a wider analogy in a different domain of information, syncing up the waves in a hierarchy of nodes to become one standing wave and focus, like a laser beam. Dreams take you on higher attentive adventures. Daydreaming is thinking.

Cycling remains, however, in a very large system at equilibrium, because it is approximately optimal; there would be a frozen world/utopia in the future if that weren't the case! Like water, the waves settle, but Brownian motion remains. A planet becomes a star when it gains too much mass, and it returns that mass/energy back in blasts by becoming a star; this is local cycling, but the global environment is actually changing. The exponential curve is made of smaller such curves - when, ex., cars are made better and better, faster and faster, they eventually reach a halt and require a totally new solution. At the end of evolution is the most advanced creature, started from nanobots transforming the Earth and growing by eating galaxies. This megastructure is optimal on a global scale: a calculator+brain solves many problems where a brain alone can't work with diverse/big numbers, so you have organs that alone can't solve a problem in the fastest time (the utopia could change to a new structure x or y; some would take longer, some would mean its death (all metal)), or use the least resources, or be most intelligent at evolving fastest, or compress data the most - but globally it is optimally perfect on average. Local optima are an issue for answering questions also; sometimes you have to go back a step on the tree of possibilities.
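
"Going back a step on the tree of possibilities" is plain backtracking search. A minimal sketch, reusing the food question from earlier (the tree is hypothetical):

tree = {
    "how gain food?": ["obtain money", "grow own food"],
    "obtain money": ["get job"],
    "get job": [],            # dead end in this toy world
    "grow own food": ["plant seeds"],
    "plant seeds": ["FOOD"],  # goal
}

def solve(node, path):
    if node == "FOOD":
        return path
    for child in tree.get(node, []):
        result = solve(child, path + [child])
        if result:            # this branch reached the goal
            return result
    return None               # local dead end: back up a step, try a sibling

print(solve("how gain food?", ["how gain food?"]))
# -> ['how gain food?', 'grow own food', 'plant seeds', 'FOOD']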


The human brain can't touch the real world; it only predicts what it sees. When we dream, external factors can tweak the dream. We heavily focus on the face for survival information and mating, and we think we look amazing or pretty and cool (there is no such thing, really), and indeed we can fight off tigers. Because humans simulate/generate plans in their brains and recognize unseen environments using past images etc., and because we are machines that resist death somewhat well, we call ourselves sacred, magical; we say we have free will, are conscious. Because we share information with others and can be 'all-knowing' in our simulator brains, and can teach others grounds to not harm themselves, we feel very, very resistant to death or mockery; we love power, we love getting what we want, and we know a lot about how to get it.

The Devil seeks to rob, kill, and destroy.
"The thief comes only in order to steal, kill and destroy; I have come so that they may have life, life in its fullest measure. John 10:10 (CJB)"
https://www.biblestudytools.com/john/10-10-compare.html
Many animals, and humans as well, will take others' lives. At the start of evolution, competition happens the most; cooperation entails later. Think of farmers or robbers or lions: they are starving and have no choice but to fight you, or an animal. It's nature. Nanobots may just eat Earth and ignore us. They may lock us in monkey cages and experiment on us. We will try to avoid death, but we are not as powerful. Humans have robbed, raped, enslaved children, shot each other in armies, eaten each other. We have also done the opposite: given gifts to others, typically our related children, but not anyone else as often - just as we are selfish for our own lives, we are as well for our offspring. It has to be that way, or no one would care as much and focus attention on one thing. Like in neural networks, Dropout and random selection help a net generalize by forcing it to find patterns and fill in blanks. It doesn't need every neuron; it loses some, or some employees, but it constantly updates them by hiring new employees and firing bad ones, or teaches its employees improved ideas or hardware, replacing the old structures. Neurons die each day; humans too, even groups, cities. But the mesh/agenda is still here, getting better. The model/you is still alive. Immortal. Robbing/replicating/mutating quickly the 'good' ideas (or big identities/cash) is a thing today. Information. Make many babies/ideas. Brainstorm. Sharing is key. Just as animals try to reproduce information and resist change/death, robbing ideas is similar. We revise/update information as well, including friend connections, and build higher-level businesses. Propagation of brain waves in hierarchies or heterarchies begins as random domains, but the waves move faster through aligned nodes, which makes it efficient to go with the flow. Look up "64 metronomes in perfect synchronization" on Google. They come to equilibrium, like mixing 2 dyes, blue & red - it becomes purple. Parents teach their kids to be religious as they are; the kids will also imitate their body actions. Influence. Kids believe in anything because they have very little data about the real world their body wants to operate in. Things die and are updated and re-used. Giving up and not hoarding your ideas, technology, shelter, food, and husband can speed up evolution (make babies faster), and others can adjust them/the ideas to improve/adapt them and their next generations.

Neurons' connections in a brain start off as randomly strengthened domains and compete at first through a cycle of improving each other (after one group in the hierarchy rearranges, the data can better rearrange the other... repeat), but they eventually align (or mix, like red/blue dyes or random motion on water) once they reach an approximate equilibrium that gives little performance gain (like training nets after a certain while), and they work together to propagate brain waves by strengthening the correct connections. It's like how, when you read a sentence, your brain better knows alternative words and entailing words, which lets you recognize unseen sentences quickly and build on them without having to learn/store new bottom-up hierarchies for every new sentence. A group of brains with more data can compete and win, just as a group of neurons with higher counts can win a spot and not Drop out.
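
For reference, the Dropout trick mentioned above ("losing some employees") is tiny in code. A minimal sketch with arbitrary sizes:

import numpy as np

def dropout(activations, p=0.5, training=True, rng=np.random.default_rng(0)):
    if not training:
        return activations               # use every unit at test time
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1 - p)  # rescale survivors so the expected value is unchanged

h = np.ones(10)
print(dropout(h))                        # roughly half the units zeroed, survivors scaled by 2
print(dropout(h, training=False))        # unchanged at test time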

i think we started as a big planet black hole
it burst like a sun
big bang
then instead of stars forming from planets too large, a black hole forms!
it can heat burst, too big
can't escape!
we resort back to a black hole
where we came from dude
there was no beginning
when a planet gets too big it bursts as a star - expand......attract.....expand....attract
one big loop
black hole will burst tho
once big enough
not even the laws can contain it
lol
it escapes the horizon
us humans want beginnings
there ain't none maybe


"In 1964, James Lovelock was among a group of scientists who were requested by NASA to make a theoretical life detection system to look for life on Mars during the upcoming space mission. When thinking about this problem, Lovelock wondered “how can we be sure that Martian life, if any, will reveal itself to tests based on Earth’s lifestyle? To Lovelock, the basic question was “What is life, and how should it be recognized?” When speaking about this issue with some of his colleagues at the Jet Propulsion Laboratory, he was asked what he would do to look for life on Mars. To this, Lovelock replied "I’d look for an entropy reduction, since this must be a general characteristic of life."

So what is intelligence? It's just physics. Every day, humans and other systems try to avoid death by all means necessary. They seek food, mates, shelter, cash, influence, and to avoid death. We just want to resist change and replicate/spread. We are trying to emerge, repair/grow, and stay at approximate equilibrium with some Brownian motion. Our bodies also try to repair and grow, and they die as well, so mutations can churn out the wounded and less intelligent animals. The brain tries to repair missing data as well. Intelligence is patterns that allow a system to grow and resist change. Humans will do anything and everything they can to avoid mockery and death; they want power, they want to communicate and breed their information genes, and they survive by finding food. We aren't magic; everything is a machine and is predictable - confusion is simply not knowing the answer. Humans have died before; it's not an abomination. The only laws are physics. Every machine is made of machines; we aren't 'alive' or 'people'. If there were a particle that could observe the universe, then every machine, even each atom, would feel like it's experiencing its decisions/laws. But I don't think any machine can account for all levels of itself to be happy; there's so much pain and death involved as well. I don't know if the universe can see itself. You need more particles to explain these extra observer particles. It's weird. We are so good at resisting death that we want to think we are alive and special, not to be even mocked. We believe we want to persist and do things, but there's no real need for us to live or come back. We want to think the universe has a purpose, but there may be no life, only laws. We can make clones of you run in a perfect digital simulation in parallel; we can replace you, or the data holes in your brain. We model this universe, we know where we came from, but we can't be sure there is not another universe or a God. We want to think everything is made of particles, or is upfront, or happy, but we only know this universe, so if it makes sense, maybe there is something else. We then ask: why are we here? If anything possible in this universe is possible and has happened - eating food, death, pain, being rude, even simulating new physics - why do we have to live here first? Again, we think we are going to live somewhere else. As I said, we don't know if there is something else, but if there is, we wouldn't know. It could be we are in a simulation, but this brings us back to our hopeless, predictable universe(s) we do know.

So what causes the first cell or nanobot to emerge and then suddenly replicate like crazy!? Such transformation is fast. It's exponential, then slows down.
And why does wind blow so much outside? That's a lot of change, too. Not 'good' change, but it's still a lot of re-arranging of particles. Death, change.
And what about small meteors and large solid metal planets? They are at equilibrium and are too solid to become interesting, or too small to become interesting.
It seems the only difference is that the larger death-resisting nanobot utopia is better at persisting the larger it is; it can eat the small or large rocks, and the wind. And it seems the first cell does replicate exponentially faster until the resources it IS able to find, plus able to digest/use, run out and it faces competition, resulting in an exponential curve made of exponential curves - meaning it does grow exponentially at first (bacteria is everywhere, as I said), but for larger systems: over an exponentially shrinking length of time, exponentially larger systems will emerge, meaning over 500 million years, then 400, 300, 200, 50, 25, 5, we may see persistent systems of sizes (in inches high): 0.000000001, 0.00000001, 0.0000001, 0.00001, 0.01, 100, 1000000000, 10000000000000000000000000000000. So the singularity did already occur, for small systems, but we have a big planet and many singularities are happening.

Anything can be deemed intelligent, better, cute. But what avoids death in the end is the 'intelligent' one. As we've seen, larger systems emerge later, exponentially faster. Ants are everywhere yet do not form a larger, high-level node government. As we go up the ladder we see human brains and the internet simulate worlds and share data much faster and with almost everyone, better than ants do.

1) The AI we have today can have high accuracy, but only at a specific task like number crunching, recognizing an object, or driving a specific vehicle.
2) These AIs can't do Online Learning, nor learn using what they generate.
3) They require many more training examples, and much more compute, than humans require to model the world well.
4) They can't abstract/adapt (recognize) data that well - ex. summarize or expand it, or re-arrange a phrase with alternative words.
5) They may think using a kitty as a meal solves the hungry-children problem, when it actually goes against our goals/sub-goals.

1) More data is always better. More attention is always better (it means less compute). If we have a text- or vision-only AI, we can just write/read to and from it as humans do to conduct any task; but to calculate, or to re-arrange every first letter of each word, it has to use attention and substitute features (translate), entail, and segment. If I'm asked to add an extra  space  between  each  word, then I have to pay attention to the start of the sentence, then to the next word, scanning, paying attention to what to pay attention to, and then to what to add as well (see the sketch after this list).
2/3/4) We need a better model.
5) It needs to recognize / pay attention to value infection/reflection: we want food/friends, we need kitchens, we need money, we need a property, we need the Earth. It's all linked.
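
The "extra space" task from point 1, written as the kind of scan-and-substitute program the attention steps would have to emulate (a trivial sketch, with a hypothetical helper name):

def add_extra_spaces(sentence):
    # Attend to each gap between words and widen it to two spaces.
    return "  ".join(sentence.split(" "))

print(add_extra_spaces("pay attention to every word"))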

#1 AGI is general/robust: it can use its past data to solve unseen problems/questions. You've never witnessed a beaver on your toilet, yet you can solve the problem.
#2 AGI is about desired data reflection: translation/entailment/segmentation allow us to abstract a problem to recognize it and adapt the solution to fit the problem. Here we have a clearer view of the morphology of data and of reflecting on unseen questions.
#3 AGI knows which task to do and which data to look at (goals, senses, keys) via attention technologies. Here we see why reflection/superposition is possible at all.
If I read a question and find the answer to it in my brain alone, I may need to talk for hours: visualizing movies, translating, rewinding, summarizing, entailing, translating again, etc. It's nuts!!!! Yet it works.

-------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------