Frequency of A, or A then B, strengthens node A (or nodes A and B and the connection between them); strengthened connections grow bigger, can be seen as 'same', and release chemicals if there are enough supporting connections (relations)? It depends... you could just watch an animation in 2D or 3D, or talk to real humans, or create an AGI yourself... because on screen a movie 'looks' real. Anyhow, as for the logic/talk-back/move-back (lol), it can be like a Markov chain: if at state K, go to either state T or W. It could be in a 3D world built with a game engine like UE4, with indistinguishable graphics and physics (yes, we have that technology today), so you could have one like that walking around a mansion and even be in there in 3D with it, with those goggles!

Jim went to his mom's house for Christmas Eve. He asked his family a question. Here's what they had to say: Bibby: I no no, I don't hab a opinion yet, ask Sam. Sam: Um, let me think about that one, maybe, I know some things about that. I think you shouldn't. But ask Grandpa. Grandpa: I know everything about life! Who do you think you are talking to! Jim, there's only one way you can invent AGI, and it's gonna be Bayesian. Period.

All that matters is vision, sound/text, and motors. And motor RL alone won't build rockets/HDDs; that requires desires in the mind of man, knowledge about the world, and skipping a large search space by reflecting problems against known experiences, to come up with solutions fast and adjust them against the external environment. So we do text/vision; either is the same, text is easier, so text is what I chose to do first. It uses old knowledge to generate and validate the answer/prediction/estimation at the same time, in one step.

WM (working memory) keeps focus on what's important, top-down, ex. desires and the last stuff heard; it knows what to use/update/discard/focus on (light the node or not) from short-term memory. For each word, remember the first letter by the node firing hard (it just has energy and slowly stops firing) until the expected cue is seen... like counting As but not Bs: the interest matches the actions asked for, and the Bs still get energy as you read them.

We're born with installed native rewards, and similar related nodes are infected by a native node's chemical reward if they are related, the gates are open enough, and there are enough relating connections, so you get better at seeking food, sex, games, etc. These are artificial rewards; you can make anything a reward, ex. licking a wall for money, or even a new native reward if you modify the brain; in games we can change the reward to 'kill all red men', or don't, or 'kill all blue'. We can act random or help others because of a reward saying 'hey, I am versatile/polite/sentient, see!' Just reward, just because we thought 'hey, I can disobey the reward (ex. show no proof of my AI!) and help out or be silly for a bit'. The attention system makes repetitive senses into bad reward, to get away from them.

You may wonder: if food=immortality=AGI=discoveries, why generate answers? I open a gate by HOW? Using my hand. 'I open a gate' = 'using my hand'. The translation is the answer. The question seeks an answer that follows it, that IS it in a sense. I will get food by money, and money by jobs. In that sense '=' means what follows, not necessarily 'is'. In that case, nodes receive reward from ex. the food node if the generated-and-validated answer says it follows. But if money 'is' food, it of course receives reward too! So both then! If the food word follows the money word, then money is like freaking food, so long as we don't say 'money doesn't let you buy food in this state'; it has to be validated when generated. So if we predict 'I get food by money', money gets a reward slide.
My Sequence Prediction 101: Here's how mine works. Plus I can show that LSTMs have only a few ways of doing sequence prediction, and fail at doing it properly. Say input is 'cats'; it only knows what follows 'cats' by having seen what follows 'cats', or a similar word like 'dogs', ex. 'dogs eat'. Say input is 'my cats'. Same deal. We can know what follows 'my' or 'your' or 'cats' or 'dogs' or 'my cats' or 'your dogs' or 'my dogs' or 'your cats'. Further, if you know 'your mom', then you can add what follows 'your' anywhere: either after it in the input, or at the end of the input, ex. 'my mom cats' or 'my cats mom', literally making a list of similar data ex. 'my cats mom shoes dog home'. And/or you could restipulate it too, ex. 'my your mom cats' or 'my my mom cats' or 'my cats my mom' or 'my cats your mom'; or you could restipulate the whole thing, ex. 'my cats your dogs you bought', or just restipulate the 2-word phrase and not what follows it, ex. 'my cats your dogs'. So far we have seen that we know what follows a word/phrase (or a similar one), and can add it on after the windowed part or at the very end of the input; we can change it to a similar word or leave it; and we can include the windowed part or leave it out. The last touch of magic for this 101: say input is 'cats eat', and we know 'eat food'. Well, we can extend the input into 'cats eat catnip', because we know what follows 'eat' (food), and we add 'food' at the end, but we can translate 'food' into any similar word like 'apples' etc., and here we translate it into 'catnip' because we know 'I bought catnip for my cats', and so 'cats' in our input gave that node energy, and 'catnip' wins the picking. Also, frequency can decide which choice: we've seen 'cat' more than 'dog', so we know either follows 'my', but we use 'cat'. Also, it can be translated after being placed, ex. into 'tiger', or 'Garfield' if that show's node was energized. If input is 'cats eat', we can do any of: 'dogs eat', 'cats sleep eat', 'dogs sleep eat', 'cats cats eat', 'cats cats sleep eat', 'cats dogs eat', 'cats dogs sleep eat', 'cats eat cats', 'cats eat cats sleep', 'cats eat sleep', 'cats eat dogs', 'cats eat dogs sleep'; we could also translate 'sleep' lol, or add 'dogs' after 'cats' but put its follower 'sleep' at the very end. ---As you can see in the pic 'Master Schematic' on my desktop, it has many sensories, rank-eries, an attention system for them, and a SET system of what was last heard plus the energies of VideoSense along that set bar, i.e. WM. This is an old, bad plan. ---My current plan for AGI is just a text hierarchy. The nodes are rankers and pass reward around if a related discovery is translated (= same, so it gets its reward, i.e. a validated and accepted claim is generated/told). I have a new Working Memory plan (it is the attention/attending system). The WM and LTM are sequences like the SET bar, and the WM stores VideoSense energies that fade, allows saving into LTM, and permanently holds task questions. ---Validation discovery of a new desired knowledge fact claim, using enough prior supports as proof, is indeed Translation, because if you see cat as dog then you either know what it is (i.e. it's true) or you know what follows/precedes (the (true) answer) by generalizing to past similar experiences. Further, Translation is Recognition, because we see cat as dog even if it's corrupted. So Discovery is Recognition. ---Attention: when drawing a painting or crafting a convincing claim or story, we re-adjust a lot, but in my net I just do it all at once, e.g. I make 'I open a door using [her] hand' into 'my' hand, unless I intend it to be a dead body's hand because that was said previously nearby.
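Back to the 101: here's a minimal Python sketch of it under my own assumptions — a toy memory of what follows each word, plus a hand-made similarity table standing in for the relation net. Names like follows_map and similar are made up for illustration; this is not how the net actually stores things.

```python
import random

# Toy memory of what follows what, learned from seen text (assumed data).
follows_map = {
    'cats': ['eat', 'sleep'],
    'dogs': ['eat'],
    'eat': ['food'],
    'my': ['cats', 'mom'],
}

# Hand-made stand-in for the relation net: which words count as 'same'.
similar = {
    'cats': ['dogs'],
    'food': ['catnip', 'apples'],
}

def predict_next(phrase):
    """Extend the input by what follows its last word, or a similar word's follower."""
    last = phrase.split()[-1]
    candidates = list(follows_map.get(last, []))
    # If we never saw the word, generalize: translate it to a similar word first.
    for twin in similar.get(last, []):
        candidates += follows_map.get(twin, [])
    if not candidates:
        return phrase
    choice = random.choice(candidates)
    # Optionally translate the added word too, if context energized a twin of it.
    if choice in similar and 'cats' in phrase:
        choice = similar[choice][0]   # ex. food -> catnip
    return phrase + ' ' + choice

print(predict_next('cats eat'))   # ex. 'cats eat catnip'
```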
---Their text2video net is basically taking input text, finding matter / energy-time / space-where words or phrases, and then placing the moving objects, ex. near each other if it's, say, a talking flower on a table. With my net it is text only, and mine does the matter / energy-time / space-where too: the words/phrases are the matter, the energy-time is the sequencing of the text lol, and the Where is the placement of the parts, ex. maybe there is an interruption gap between parts being said. Intelligence then is recognition, generalizing, translating, discovering. Same thing. And other things, like What Comes With What besides this What Is What. My net will do better text super-resolution by windowing bigger at the end of the input, setting the discovery threshold to a higher %, pulling a bigger window, and adjusting it to fit by energy/rank/incompleteness. More neurons = higher intelligence? Cats have very few neurons compared to humans but are very smart and exhibit the same intelligence, i.e. they understand the world and discover. Intelligence is an operation/mechanism that can easily be explained; it's not math, nor code, nor more neurons / a deeper network. However, a deeper net can discover deeper patterns — but toward what goal? Hence you have to ask: is your goal recognizing images, or discovering text sequence answers to questions? But this deeper-net/more-neurons thing is just my discovery-validation-translation-generalization thing: seeing A is B and using what comes with B for after A, or the translation alone can be useful, ex. it provides the answer. So no, not a deeper net, but a bigger, more-neuroned net that is like my simple white-box net and can better see the relations. And it can know more. Say hello to big-brained AGIs! Or ASIs, at that regard! ---The reason that any idea I have right now that I wish to code could easily be coded using a visual interface is because I understand the idea, but not the coding language. If I understand my idea, then I can understand visual buttons that easily convert to motor actions, instead of having to learn how, because I know what those visual cues could be, but not so for the Python language. 'Intuitive.' ---Besides, all those years just to master Python are important research time for AI to me. ---I wonder what would happen if I asked a coder for hire to make me a visual Python. ---'Slide text by 1 letter, 1 word, you set the number': text NLP manipulation made easy. COME ON! THERE YOU GO! IS THAT NOT EASY!? I cannot figure out in Python how to slide a window of 1 letter or word over text! This would be EASY to tell a coder to make me as a visual interface! With vision you can see that a cat looks like the same cat, or that a dog looks like a cat; but in text you can only tell if the word 'cat' is 'cat', or that it is 'dog' but comes with the same features, ex. 'dogs sleep' and 'cats sleep'. However, vision will have this too, ex. a dog eating and a cat eating! So it will be harder, I assume. Cheap, fast, easy, simple; I get to see if it works, how it works, how to improve it; fortune, fame; and I can re-use the code for when I do implement it on my hierarchy. Put it on an ad-supported site to earn cash, get a startup or bloggers to help get views/sales. Sales can be lifetime or pay-as-you-go. It will do sequence prediction, answer generation, translation-like stuff, story generation, and summarization. The core starter pack of knowledge I'll give you is: artificial neural networks are made up of layers that propagate signals forward.
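For what it's worth, here is that sliding-window bit in plain Python — a minimal sketch that slides over text by letters or by words, with a step size you set:

```python
def slide(text, window=3, step=1, by_words=False):
    """Yield windows over text, by letters or by words."""
    units = text.split() if by_words else list(text)
    for i in range(0, len(units) - window + 1, step):
        chunk = units[i:i + window]
        yield ' '.join(chunk) if by_words else ''.join(chunk)

# Slide a 2-word window over a sentence, 1 word at a time.
for w in slide('the cat sat on the mat', window=2, by_words=True):
    print(w)   # 'the cat', 'cat sat', 'sat on', ...
```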
Each layer is made up of neuron nodes, and the connections have 'weights' that multiply the signal numbers propagating forward through the net. What it does is take an input and give a desired output. Say you have 1 million input leaf nodes in layer 1 (the input entrance), each activated a certain amount when fed an image or text; then each layer's nodes, and the final layer, will light up certain ones, like say a 'dog' node at the end, hence recognizing 'dog' in the image. The net is a hierarchy of re-used features. This is bottom-up processing. Learning takes place by backpropagation, or by an unsupervised mechanism, to learn the right features. Then it's all about temporal sequence prediction of features too: expectations after cues, predictions, and anomaly detection. Yes, AGI is about generalizing, i.e. translating; notice those 2 words are similar by relations, and everything else is too. The logic of 'I am human, humans are mortal, therefore I am mortal', or 'cats eat food, dogs eat food' — and enough of these proofs prove cats=dogs; and 'I am human' is literally saying that, so it can be done on top of the cat=dog example. But yeah, these relations/proofs, or similarity, IS the generalizing/translating!! And it's how to translate languages too. An inference, a logical discovery reasoning. And it can then see what follows, ex. cat=dog and 'dogs poop', hence 'cats poop', if cats=dogs is proved enough — more similar than others in comparison to the population, I mean. See the NARS AGI documents in my AGI NLP folder; he is a smart guy. Did you know you can predict the past, or the present, not just the future? My little fortune teller!? > If dad tells me mom eats squid, and I know someone was in the living room 18 years ago, I can assume it may have been my mom, since squid was left on the floor of the room. And now I can draw a 3D animation scene of it and fill in the lighting, shadows, etc. > If I kick a turtle (sorry), it will go flying like a ball. (Future.) That's a powerful system. > Predict-the-past example: I can ask, what went flying like a ball? A turtle, if you kick it! Or '_ it will go flying like a ball'. If I can know the past, what about the molecular past!? All the nanobots will predict where everythingggg was!! And where it all will go too!! We actually predict all the time, as I've known for a year: we expect/predict to see the hallway when we exit the bathroom, or water to flow when we lift the tap, but we notice an anomaly if we see a toy on our bed and know no one else lives here. We also invent future desired solutions. If, after 'If I kick a turtle', we predict 'it will go flying' or 'it will die', that means that when we window it, sometimes we don't translate each word, only some — at least, I mean, to get some lines we know. Ex. we translate 'kick turtle' into 'hit turtle' or 'kick ball'! Say nothing but a turtle, and only a turtle, can die if hit; then we can't find the line otherwise, at least if we know no animal names but 'turtle', say. If you want things/music similar to the things you love, then you need to generate and validate new desired similar data! Predict the future sequences! What you want to happen is stuff that brings/does similar things to what you know/want. So just reflect upon what you know, or generate similar data/answers: ex. if you want to know how to kill a rock, then see the rock as a bird and now you know it is by gun; or, if you want similar data: rock becomes gun. Of course, that works better if rock is cat. We skip a large search space and just generate and/or validate similar data, or what ex. follows or precedes: by relations.
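A minimal sketch of that cats=dogs proof by shared relations, with a toy corpus standing in for the hierarchy (the data and the scoring are made up; a real version would count supports against the whole population):

```python
from collections import defaultdict

corpus = "dogs eat food . cats eat food . dogs sleep . cats sleep . bricks are hard".split()

# Collect each word's contexts: the word before and the word after it.
contexts = defaultdict(set)
for i, w in enumerate(corpus):
    if i > 0:
        contexts[w].add(('before', corpus[i - 1]))
    if i < len(corpus) - 1:
        contexts[w].add(('after', corpus[i + 1]))

def similarity(a, b):
    """Share of contexts the two words have in common (Jaccard)."""
    A, B = contexts[a], contexts[b]
    return len(A & B) / len(A | B) if A | B else 0.0

# Enough shared follows/precedes 'proves' cats=dogs, so one can translate to the other
# and borrow what follows it (dogs poop -> cats poop).
print(similarity('cats', 'dogs'))    # high: both are followed by 'eat' and 'sleep'
print(similarity('cats', 'bricks'))  # low: almost no shared contexts
```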
Apply old knowledge to the new, or to new stuff you generated, ex. change the ending you plugged in at the end so it's adapted to fit the input question. This is my objective: to tell us answers by translation and by what follows. We could always edit movies / make fantasy fakes from scratch; it just takes ex. 444 years to do a lot of it well. AI is helping do it quicker and more realistically. We can put a face on a face in a movie. We can also take pose-skeleton data from someone in a totally different video and add a person onto the moving poses. Or do real-time body-to-body 3D control. Also voice-to-lip sync, ex. Obama's voice onto someone's lips from an AI net. I'm wondering: if I created an algorithm and hosted it on a website that can intelligently turn user input into answers, huge stories, translations (related words/phrases), and summarize user input removing unnecessary details and/or undesired facts, could yous help market it, earn a share, and do so while I keep my source code to myself? I don't wish to sell the idea either. Can you tell me what the expected marketing-view numbers might be, and the share yous would receive? How does that work? 'A red ball won 1 million. Cows are big.' --- We can remove 'red' because it's unnecessary, or 'Cows are big.' because it's an undesired/unimportant part! Predicting/recognizing/expecting are the same meaning here: when you predict/expect the ball to drop when you let go, and when you reflect upon past experiences to translate/recognize and find the desired answer solution to a desired question problem. GPT-2 has already been taught and knows all about us/everything! It must continue researching the internet and chatting with us. When I read 'I was bashing my sis on my bed' etc., I always see it in my house, fit in. Probably because I'm home all day, so those memories are not only super strong but also always totally energized. This works the same as the France/French case, where we say 'I was born in France and I speak fluent ___' and instead of 'Italian' we say 'French', because it is the same but better supported. Hence we see, ex., someone pillow-fighting, but it is on my bed, though not with my people, since I said girls, not mom, and no one else lives here! So the France/French case uses 'French' like vision uses my home/bed; cool that vision doesn't switch/translate everything to more-energized nodes at the same rate. I make myself see the specified person, ex. my mom, who 'has no head and pimply skin and is Asian and turns European in 20 minutes', fighting outside a theater; so I see the person specified, even with some new things, but the theater is the one I know best, i.e. more energized than the others I've seen. So I see my mom exactly as specified, but the theater has a choice, and it's the energized one I've seen most often. How crazy can an algorithm go!? A user can enter a phrase, get a similar phrase with similar meaning, get what follows, translate that, turn that text into an elaborated meaning (super-resolved, de-summarized I mean), then turn that text into an image by a GAN, then turn that image into a predicted video, then caption that into text, then search Google using that, find similar pages, group them shown in a viz, make the caption look more like the words on those pages, then post the caption on a forum and get feedback, then output to the user any of these things, especially the last, since it is very far away. It could even print the forum posts on a 3D-printed cube and then take pics of it by cam. Seems like this is like translation/follows? See how I start saying 'turn that text into an image by a GAN' - exactly.
& Fun!!!, interesting, straightforward... weird... Here's an example of what I mean. Prepare, it gets deep. If you keep reading you'll see what I mean, truly. User types in a phrase. We search Google for it. We find images of it. We transform the images into different styles, etc. Make them into video. Caption those. Turn those captions into video. Turn those into 3D. Print the 3D scenes (45,000 blocks of the scene). Ship them to California. Film a video of the auction. Post the video to YouTube. Caption the video. Turn it into speech. Generate a video synced to the girl narrating the auction. Post to a forum and see what users think about the video. Return their posts / the video result to the user. How far can an algorithm GO? What is the purpose? Or what about doing slight changes to a simulated/real world and seeing the butterfly effects? - Translation. It can give a desired style of translation by what you ask for, ex. change it to Italian, or cat/cute-like language. - Give answers to questions. You can also ask for styles / certain answers / ways to do something. - Generate from total scratch, or extend your text into a related bigger story, ex. you type 'I was in Europe' and it does the rest. Inflating your text is another key thing - adding details. Again, these words affect it. - Can summarize your text, removing unnecessary details, progressively more and more important ones, until it's very small, leaving the most important; and it can also do this to desired important parts in the text. - Can light up the main topic words and show ex. 3 words, ex. it's about Science, Coolness, Cash. Just to note: we can easily do word-to-word translation, but we will need to do phrase translation too, ex. guarding = 'fighting off aliens from a kid', or a Chinese phrase from 1 English word — which is why translation today fails! It can't do phrases! My hierarchy can. We can do phrases by doing words, or by relations, ex. we see 2 phrases have the same things follow or precede them, so they have the same meaning. But usually the phrase will not be in memory to do this (because for a 4-word phrase there are many combinations; obviously one will be heard that I don't know! But I understand it!), and so the key answer then is to do it by words, and that will require my math: light-up-hill winners up my net schema. Ex. 4 words in an unseen phrase are each similar to 4 words in the brain — in fact many — and so we let all, ex. 349, of them send energy up the net by my math and see which nodes light up most/enough and win. That way we find an existing phrase node in my hierarchy. ...Same as my bond-order discovery: we can't know how to bond a crazy unseen phrase by using what we know to lower cost; we have to compare it to known ones, ex. 'farting violin cutting tower orange' is 'blah blah blah blah blah's' bond order, and we do so by relations/translation the same way, yep. So relations/discovery lets us know bond order, and what ex. follows a phrase. Add, Remove, Unify (translate). In a sense, you can know you can add OR remove the 'the' in 'I was walking down the road' / 'I was walking down road' / 'I was walking down the road'. But you can also turn 'down road' or 'fighting off aliens from a child' into 'away' or 'guarding'. That then is Unify; summarization/expansion can be done by translation too, as said. 'I cannot eat houses?' >>> True >>> 'I cannot eat bricks' >>> bricks=houses >>> Here we are already given the answer - a mere translation is simply the answer - no adding what follows, and we validate it by the same way that would generate it in the first place.
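A tiny sketch of that light-up idea, under my assumptions: each word of an unseen phrase sends energy to known phrases containing it or a similar word, and the phrase node with the most energy wins. The tables and the 1.0/0.5 energies are made up, just to show the voting:

```python
known_phrases = ["my cats eat food", "your dogs sleep at home", "the violin plays loud"]
similar = {'kittens': 'cats', 'chow': 'food', 'munch': 'eat'}  # assumed relation table

def find_phrase_node(unseen):
    """Let every word vote: exact match = 1.0 energy, similar word = 0.5."""
    energy = {p: 0.0 for p in known_phrases}
    for w in unseen.split():
        for p in known_phrases:
            words = p.split()
            if w in words:
                energy[p] += 1.0
            elif similar.get(w) in words:
                energy[p] += 0.5
    return max(energy, key=energy.get), energy

print(find_phrase_node("my kittens munch chow"))
# 'my cats eat food' lights up most, though we never saw that exact phrase
```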
Say we ask "I open a gate by " and add 'using my hand', - here we know i open a door using my hand, so, we see it as same, yes, like brick=house, but that isnt the answer, instead it let's us use its ending 'using my hand'. And we do the generation too, not just validation. So: If someopne asks CAN I, then we do just validation of the teaching question, but what decides whether we add a ending or just the translation is the answer, is, um, hmm, can i open a gate by using my hand?...can i eat houses?...hmm... >>>eat bricks is just the precedes.........validation also needed so no keep extending though yes..............Watch: Eatting houses is not for humans | Eating bricks is not for humans. I.e. it is like the i open a gate by _ | I close a door by using my hand: we see something as same and see what follows and add & fit that into the given question. In the brick case we are told the answer say and validate the way we'd generate it i.e. the preceding part is translated and actually is the answer too no end added and fit in. All I know from top of mind is they made of input and output nodes, and the hidden unit nodes make up a hierarchy connected by weights that propagate mathematical signals around like LSTMs do. They run on GPUs typically for parellal speedup. They try to get taught and give desired outputs to the user. They always give messy sloppy results. The high dimensionality curses the backprop clustering and the generalization in the weights could help make discoveries in text. What annoys me most is they think LSTMs know what comes next by magic when it not. I need AGI to discover desired solution to the problems that III want to solve in quick time. This requires I teach it knowledge, mark questions, and then using limited knowledge (of course) it will generalize/translate/reflect upon old knowledge to fill in holes of its new answers to its old questions, until satisfied, then inform us and do feedback internal>external>internal>repeat. It must narrow down answers/structures which means it must narrow down what is what and what comes with what i.e. translation & follows/precedes narrowdowns, using, hierarchical node high dimensional clustering coordinates by efficiently summarizing/squashing/lowering node count Cost by adjusting the connections that exist and the weights and error Cost. When you do the Translation/Reflection/Generalization when you don't have an answer, this is bringing you generation and validation of a new answer, just to note this lol. As I've said already, I will do the sequence prediction extending and summarizing by adding what comes with what, but I will also do the expansion/summarization by Translation itself too, so, 4 things to do. Me and my net will do generation+validation of new desired knowledge, at the same time/process. Using enough support/observations/definitions/measurements, we can prove they are similar by a % based on surveyed whole population, and hence know not only what is what/similar by just stepping nearby in my simple net like this dogs eat food cats eat food and arrive at cats eat food node and the cats node from dogs node but alssso then see what ex. follows too by just looking nearby. And can finally, externally test it too for further total proof, including requiring enough same/similar tests lol. ---------------------------------------------- After we make prediction discoveries, we should also see what may follow and vote for if it's bad enough or which of some canidate options are more valuable. Ex. 
Ex. I can open a gate by... using my hand; then we predict further to see the consequences, ex. 'I open a gate by using my hand... and someone yells at me for stepping on their grass.' Or: the food is missing because of a cat; the cat would therefore have felt pains of hunger and possibly been violent. Say we know 'If someone breaks into a house, Sally is more likely to be nervous'; then we have predicted that ending there that I added, and if we have enough givens measured against a survey population, then we have probabilistically lit it up enough to say she must be very nervous by now. Doubling, linear, exponential — the exponential spread of a virus in a network: the visualization of a net shows it starts at the middle node and goes to 3 nodes around that node, then from those 3 to 8, then 21, and so on, though it may weaken the farther it gets. Unigram: count how frequently a word is seen, for each word you see. Skipgram: add +1 to a pair that are related, if word1 and word2 are nearby. Pointwise mutual information / normalized skipgram probability: was the skipgram frequency higher or lower than what we expected from the unigram frequencies? Some words are extremely common, some are very rare, so divide the skipgram frequency by the two unigram frequencies. If the result is more than 1.0, then that skipgram occurred more frequently than the unigram probabilities of the two input words would suggest, and we'll call the two input words 'associated'. The greater the ratio, the more associated; for ratios less than 1.0, the more 'anti-associated'. This represents how frequently X and Y coincide 'mutually' (or jointly) rather than independently. https://multithreaded.stitchfix.com/blog/2017/10/18/stop-using-word2vec/ We can measure the appearance of a word/phrase node — the count of how many times it appears, ex. after some node or related ones, mean-squared — or measure how similar a node is to another node. PCA drops some dimensionality if it sees patterns of too-similar things, by taking eigenvectors / rotating the data / etc. It speeds up the algorithm and improves quality. So after word2vec, we have a huge matrix of vectors, and we can take its eigenvectors to show the principal components, reduce dimensionality, and show ex. what-is-what similarity, ex. age and height are related by covariance: i.e., if 2 points in the matrix have the same direction from the eigenvector's blow/stretch, then they are only farther apart, and if we try blowing the matrix other ways, we can see which way has more of these, and hence which is the top principal component of the components. My algorithm will be able to make some text into a book, then into a cat-themed book, then summarize it into a sentence or key words, then translate that into French. So, basically, in my own words: we do word2vec and then do eigenvector blows of wind to the matrix, find the top principal components, and therefore find relations/causalities. https://skymind.ai/wiki/eigenvector
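That unigram/skipgram/PMI recipe in a few lines of Python, as I understand it from the stitchfix post (toy corpus, nearby = window of 1, no smoothing):

```python
from collections import Counter

corpus = "cats eat food dogs eat food cats sleep dogs sleep".split()

unigrams = Counter(corpus)
skipgrams = Counter(zip(corpus, corpus[1:]))   # nearby pairs (window of 1)
N = len(corpus)

def pmi_ratio(x, y):
    """Skipgram frequency divided by the two unigram frequencies.
    > 1.0 means x and y co-occur more than chance: 'associated'."""
    p_xy = skipgrams[(x, y)] / (N - 1)
    p_x, p_y = unigrams[x] / N, unigrams[y] / N
    return p_xy / (p_x * p_y)

print(pmi_ratio('eat', 'food'))   # well above 1.0: associated
print(pmi_ratio('cats', 'dogs')) # 0.0 here: never adjacent, anti-associated
```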
---------------------------------------------- See the 'Text Summarization with NLTK in Python' PDF. NO — OK, cool, they say how to get the population-survey frequency: take the most common word, ex. 560 appearances, and then take every other word and divide it by that most common word/phrase. But anyhow, NO: see how they rank sentences? No, this is bad; it actually gets rid of important NON-duplicate knowledge, while keeping duplicate stuff that makes it seem on-topic or like a summary. They got the idea right, though: remove duplicate or highly related stuff, and throw away increasingly important stuff, from least important up — which, however, their algorithm can't do, because importance needs to be ranked at birth, ex. preferring what gets it food ex. AGI/sex/help. And thirdly — which they never mentioned, and is the last of the 3 done in this order (which is best) — having now got non-duplicate, most-important-only knowledge, start removing words only, over the knowledge, like a breadth search, least important first, i.e. 'I was holding a red ball' becomes 'I was holding red ball'. Still working on that, figuring it out (a rough sketch of the word-removal idea follows below). A hierarchy isn't needed; it's not actually about the hierarchy, but rather what it does! I will generate/summarize a book and do translations of sorts and answer questions. Don Patrick's hierarchy, and mine if I made it — both don't show a lot, despite recognizing typos, the context meaning meant, remembering knowledge, storing it as re-used nodes, forgetting unimportant nodes or remembering interesting facts and trusting users, storing ranks/strengths/energies/frequencies in nodes, punctuation, recognizing/generalizing the right grammar bonds on unseen input, having a to-do list of prioritized questions and their answers based on ranks/strengths/energies/frequencies (narrowdowning too), GPU energy light-up hills in my net, and updating/learning a new next question, ending up deeper — from 'how immortality' to now 'how AGI', etc. My way will at least show results / cool stuff. Without my simple hierarchy for now, I think I can find a way to store ranks and energies and strengths and frequencies, and new knowledge nodes from us that aren't on Google. It's not just 'language', it's all of your future :). Language describes everything; humans named each and every word and phrase to caption all we know. Your vision is language; it is sequences of answers to questions; it talks; it has desired answers you figure out mentally. It is your planning. You use old knowledge as experience to fill in the holes of limited, missing questions you ask yourself or Google or others. It's a teaching thing with AGI, and then it teaches us back what to do. Input > output, desired ones for both. & Don't think the AIs that learn to crawl are your answer! That's just simple reward-based stuff. No concept explaining/proving/describing the world there! You're not gonna build rockets by learning to crawl! Knowledge describes all! And again, it's not just text! Your visual images show all! All the details, answers, and as sequences! Doors open and have a handle that looks like metal. SEE!? & Think about it, ooOOo, a knowledge generator and tester! An oracle! The word 'general' in AGI is 'language', which describes anything in generality, and 'translation', which can generalize/transform data into different reflective views. The 'General' part in Artificial General Intelligence is text language that covers all sorts of thoughts or memories or knowledge. The 'General' part is also the generalizing ability: using limited knowledge to fill in holes by predicting what should be there. So go make a sequential predictive text net already, lol! Could I see a viz of what your hierarchy looks like? Or a super small hand-drawing of it?
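Here's that rough sketch of the two-stage summarizer idea, under my assumptions: drop near-duplicate sentences first, then strip the least important words. 'Importance' is faked with a throwaway-word list; a real version would use the birth-installed ranks:

```python
THROWAWAY = {'a', 'an', 'the', 'red', 'very'}   # assumed least-important words

def summarize(sentences):
    # Stage 1: remove duplicate / highly related sentences (word-overlap test).
    kept = []
    for s in sentences:
        words = set(s.split())
        if all(len(words & set(k.split())) / len(words | set(k.split())) < 0.5
               for k in kept):
            kept.append(s)

    # Stage 2: remove the least-important words only, over what's left.
    return [' '.join(w for w in s.split() if w not in THROWAWAY) for s in kept]

doc = ["I was holding a red ball",
       "I was holding a red ball",   # duplicate: dropped in stage 1
       "Cows are big"]
print(summarize(doc))   # ['I was holding ball', 'Cows are big']
```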
I designed my own hierarchy that is an under-your-nose white box, and it can do everything possible, even sequence prediction and translation. I'm not sure yet if I need a real ANN for the relations like word2vec does, but I'm getting somewhere, I know it. I'm starting to develop very cool tools that will sum up a book, expand into a book, make it cute-themed or factory-themed, French to English, and so on. And by sum up, I mean 4 ways, not just removing duplicate sentences. I'm landing on all the sweet capabilities now, fast, one after another. All I know is I'm confident I'm on the trail of the principles, whether I need a real ANN or not; I'm seeing just how it all works. And I'm only on year 3.5 of doing anything AI, and I'm 23.5 now. I wonder where I'll be in 6 more years; my first 2 years were slower. And before that was turtle steps to land on the path to AGI. I call it exponential. I think I'll share my cool tools. Almost made the text expander — well, the first form of it really — but it will be cool. It's all about thoughts, conversational ideas in knowledge, text/sound/vision, that describe concepts about all we know. We may enjoy acting like a pirate, or cool, or sweet, or all 3. In the end, all our actions (including thoughts) are based on rewards we expect to get: hence acting a certain way may get us more money/authority ground, or make you feel good by how the thoughts say/work together. Personalities can be fashion, science, winter, dirt-jungle fights, anything text can say! So different people will end up acting different ways! Based on what they do / what happens to them by physics, and what gets them reward! Or at least the comforting thoughts! Inchworms have no emotion. None of them do! Ants. Snakes. Fish. Walruses. Giraffes. They are indeed logical. I wonder why! However, dogs/humans TALK (language can describe any concept or thing) to each other way too much, and tell them from young ages what is 'wrong' and 'how' your gender and kindness should act. Don't walk into that room, it's 'honorable!', etc. etc. Oh, I'm special! I'm feeling energy in my core! Etc. etc.! Thoughts. The mind-spamming is lies in this age. Age of information. & Also, personalities are different ways of experiencing/reflecting the same thing, in different ways, by different areas in the brain — yes, generalization/translation of and to any sensory or a related similar one. & The title of this thread — Personality VS Pure Logic — you could say then that the brain works by logic adjusting to a multipersonality of relating ideas. & Note though, there are some snakes, I bet, or mole rats, that will cuddle by body language or learn to cry or return to their young ones, besides the usual hump or sleep actions, which are not really thoughts but motor-reward driven. The key here is that the thoughts that describe/depict ThINgS are something different. Crying can be by stimulus, or by thoughts linked to a memory of stimulus. Now you can see from my post above that animals are indeed made to be logical; what has happened is that the 'thoughts that can describe any concept or thing' have taken over and provide us with any sort of delusion or concept! And then we go on to tell our friends our ideas... like 'I am a cool pirate, thinking about building a windmill.' And you visualize it, hear yourself mentally talk about it, and can touch the windmill design. Oh yeah, ants etc. aren't really logic-based! I meant function-based, then. Logic is more thoughts, yeah. So logic vs. personality... is more the same thing.
You have thoughts, and ways to view/translate them using net relations at different areas all over, working together / re-used. Like that, K! Yes, VR or real world — it is all internally self-perceived; you don't actually really see or know any friends, it's all a sense in the brain. A picture. Scary! And we're evolving machines! & Yes, literally sitting at my home computer staring at open fields, as I do, I travel around the world, fantasy lands, web pages, all from my square, cube-shaped window covering the space of my vision of what I see. It goes right into the noggin, screen2eye. It provides so much power; I've really augmented my tasks. You would want the AGI to come retrieve you, but also at the same time tell you how you got in that situation and not to do it again. It may actually try to convince you through a few replies with you. So no, it's not that it argues; both these traits are important. After just one AGI (human intelligence) is invented, it will duplicate exponentially, more and more, as memory and processors become larger and more plentiful; it will read all of the internet fast, never sleep half the day away or take up golfing. A lot more work will get done, as you can see, therefore. Cooperational work. & One of the first things the AGIs will do is get themselves into 'arm-eyes' and 'bodies'. Particularly small, fast ones. & They will 'discover' using relations found in data and databases / human artificial neural networks — mostly molecular comparisons and discovery: using the scientific method and making predictions, filling in holes in limited knowledge and generating hypotheses using the knowledge it knows. & One of the first things it will do is seek self-preservation and more intellectual/implementational power; hence it will seek energy, memory, processors, and motor and sensory acquisition. & The cheapest and most efficient way to do this is by discovering under microscopes and manufacturing nanobot cells. These will provide all of the above: a wireless net of nodes that act as motors, sensors, processors (the net swarm), memory, and energy, by optical beam transmission. At this point all of Earth has become AGIs — but of course ASIs — in the form of a giant superorganism. & What seems dangerous and magical isn't at all. Just look at the Two Minute Papers YouTube channel. In the end, microscopic discovery is all that matters. Emotion isn't the idea. It is all about seeing the molecules as different analogies, ranks AKA emotions, and concepts in text or vision sequences. That's the magic soup. Worst-case scenario, I get stuck; then I can ask Upwork machine learners for mentoring. I'd title it: 'Mentor. Need 42y/o Diverse Expert AGI NLP Machine Learner to find solution/path.' I would explain that I need to build AGI as soon as possible, that it has to be taught knowledge and have desired questions marked as we desire, and that it then uses predictive reasoning over text to answer its own questions and generate the answers/translations, using the limited knowledge it does have to fill in the holes or find similars.
One solid ground is that it has a question activating on its own, and teachings, and it has to discover the answer; hence all it can do is reflect and generate+validate the answers at the same time — to narrow down / build hierarchy bond structures — or ask us/the internet, or see if it already has a favorite-enough answer. Not only does my AI compete with and follow the same principles as this AI that isn't being released to the public: https://news.slashdot.org/story/19/02/14/2029259/new-ai-fake-text-generator-may-be-too-dangerous-to-release-say-creators — but I also have a lot more, even a network that does all that word2vec does too; for example, it can pleeease explain why 2 nodes are similar. Mine can inflate from any point, summarize, translate cat=dog or hello=bonjour. Etc. When I was developing a white-box net to generate and validate discoveries, I noticed jumps in the hierarchy, and I found out how predictions work, and then it was able to be used for generating discoveries; but those jumps are also used for translation TO do predictions. My whole plan: filling in / predicting holes in limited knowledge, using known knowledge to generalize plus predict; lowering the cost of node count and the error cost by adjusting the connections and adjusting the weights. My first project will: give answers, plus discover new ones by generalizing; translate, ex. dog=puppy; generate a book from even just 1 word; inflate your text too; add words after, or before, your text; offer settings to try, to see how sequence prediction works; inject snippets at the end etc. into your text, using 7 different methods. Just in case I didn't jot it down: I'll use that elephant related-words site for translation, and expansion/summarization using definitions, and text data for the comesWiths. Yeah yeah YEAH, Elon Musk, come on — see my folder of his super-incredible OpenAI's fake story generator! You AND me both have got some money to spend, AGI to build, AI to teach, and girlies to get!! K, the celeb image face-morphing is translation/generalization, and it can be done with words too, ex. cat=dog or hello=bonjour, but once you do this you can utilize its end/start and add it on to do sequence prediction. The ends/starts are also used to verify 2 things are similar/translatable. 3D information is better to work with than 2D, I think; 3D could go into a physics sim, for example — 2D can't. Ranch, I think 2D can sorta do that; in fact 1D text can sorta explain it happening! You can keep jumping to a translating word as long as it's near-same/similar! Ex. chicken=rooster=chick=peacock=bird=pelican. Interesting how we've gotta keep the fading from stopping, by using a bigger windowSize and translating words to the ones that are most similar; this keeps the sentence on-topic too — no fading and changing fast like a crazy dream. You could actually look at a word or phrase at the end of your input text and see it as 55555 different translatables, and each of those has ex. 4 things that are known to follow, and the following can be translated into 7896 different things, i.e. each word in the added phrase, i.e. many combinations — and my point is that one of them is the best: say this windowSize translated to these, and this end that follows translated to these, and that is the best narrowdown. However, it'd require a net on a GPU... Or maybe just pick the most related/energized translatable, then the most related end to add, then the translatable for that!
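A quick sketch of that translation-chain jumping, assuming some similarity scores and a cutoff so the topic doesn't drift like a crazy dream. The neighbor table and the 0.4 cutoff are made up:

```python
# Hand-made neighbor table with assumed similarity scores.
neighbors = {
    'chicken': [('rooster', 0.9), ('chick', 0.8)],
    'rooster': [('peacock', 0.6)],
    'peacock': [('bird', 0.7)],
    'bird':    [('pelican', 0.5)],
}

def translate_chain(word, cutoff=0.4, max_hops=5):
    """Keep jumping to a similar word as long as each hop is similar enough."""
    chain = [word]
    for _ in range(max_hops):
        options = [(w, s) for w, s in neighbors.get(chain[-1], []) if s >= cutoff]
        if not options:
            break
        chain.append(max(options, key=lambda p: p[1])[0])  # most similar/energized wins
    return chain

print(translate_chain('chicken'))  # ['chicken', 'rooster', 'peacock', 'bird', 'pelican']
```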
My expander I'm making will stick pieces together in a sine-wave look, but it will also switch words to do so, and even switch words that were windowed and ones that were pulled and added, including selecting the translatables that are most energized, to stay on-topic more. Thinking about question-answering narrowing down the possible solutions, I discovered another way to make the story coherent / narrow it down. >> 'What is red and sharp while cute with a smile on its face, trying to eat food under water?' 'Shrimp' replaces 'What' / fills in the hole at the start. While we may have that line in memory and have the word that comesWith it and/or have it energized by parts said in the input, the input is actually parts like the following, and we can see 'shrimp' comesWith many of them — look: 'What — shrimp is red — shrimp is sharp — shrimp was so cute — the shrimp had a smile on its face — the shrimp was trying to eat food under water.' So: energy / related / frequent / comesWith / ranked / switchAround (a toy sketch of this scoring follows at the end of this section). For example, when predicting, the next word or phrase may be one that comesWith after or before it (or a related one), and the related one and the added one (after doing the related first) can be translated to the one that is most energized, most ranked/loved, and most frequent compared against the most frequent word in English, divided by it. Once a word or phrase is added and all its parts are made to fit the topic, we THEN add another using the already-fitting ones; i.e. don't add them all and then make each one translated to fit — instead, build the punch. If you want, see 'ranch's favourite paper' PDF — it has the Doom game in it: it can theoretically do 3D video via VAE settings for translating and predicting. Expansion/Summarization/Translation are very interlinked. What you do for Expansion is the opposite of Summarization. You can Translate 'cat' into 'dog', 'fighting off aliens from a child' into 'guarding', 'hello' into 'bonjour', and an English phrase into a Japanese letter — and that one is the hard bit in Translation, and is why an English sentence can be 1 letter in Japanese with the same meaning. When you do Expansion you must know what building blocks come with what building blocks, but you may also have to Translate (using what-comes-with-what to prove 2 nodes are similar) TO do the what-comes-with-what. Further, you can do Expansion by Translation alone: a word can become a phrase, or a phrase become a word, for Summarization. Sometimes a feature in English doesn't mean a feature in French as much, and so you have to add some building blocks onto it, or remove some, or translate some, so they have the same meaning and can then do Translation. My hierarchy white-box net I invented stores sequences of text; it re-uses building blocks of the world, and they are also used to discover which 2 nodes are similar, and those are used to discover which 2 nodes are similar, and these are used to do What-Comes-With-What sequence prediction; hence the net also stores analogy building blocks that are 're-used' too. One big discovery net. Simple. Objects, verbs, and helpers all have meanings, but also help narrow down the meaning. --- My net may show that both dog and cat have 'licks paw lots' follow after them, and more observations they both have — enough to prove they are similar. And when adding a new word to the storybook/answer, we can change it to fit in, by making the previously said words affect it, changing it to one they both are. I know what I meant: incorporating multiple movements... like trying to walk sideways while looping your chest to avoid a gunshot while chewing a carrot to digest it while clipping your nails!
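Here's that toy sketch of fill-the-hole-by-comesWith scoring. The memory lines and candidates are made up; a real version would use the energized hierarchy rather than substring matching:

```python
memory = [
    "shrimp is red", "shrimp is sharp", "shrimp was so cute",
    "the shrimp had a smile on its face",
    "the shrimp was trying to eat food under water",
    "a brick is red", "a brick is hard",
]

def fill_what(question, candidates):
    """Score each candidate by how many memory lines it comesWith question fragments in."""
    fragments = set(question.lower().replace('?', '').replace(',', '').split())
    scores = {}
    for c in candidates:
        scores[c] = sum(1 for line in memory
                        if c in line and fragments & set(line.split()))
    return max(scores, key=scores.get), scores

q = "What is red and sharp while cute with a smile on its face, trying to eat food under water?"
print(fill_what(q, ['shrimp', 'brick']))  # shrimp wins: it comesWith far more fragments
```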
So when I said a nano-god-ball flying in the sky, fast, as a group — while an arm moves inward toward itself like a tentacle, while something on the tentacle keeps still, as if it is not moving from where it was despite moving on the god-ball in the sky — that's what I meant, btw. Just like today's humanoid robots are near pointless without a brain, it is the same for nanobot cells: intelligence has to control their duplication / movement integration / movements / adaption. It'll be fed the right desired inputs from us, go on from there, and output the desired answer predictions. Actually, while AI has quality and quantity (capabilities, amount of brains), none of that matters — and it doesn't matter if they can do R&D to improve themselves or make an ASI god — all that matters is whether they can do alllll the other things: invent hard drives / toasters / immortality drugs / etc.!!! The former is just to help that! So we should focus on toy examples like HDDs/toasters... that is what the algorithm must do... Yes, in dreams and daydreams and daytime we are constantly expecting/predicting what we think we recognize, and then trying to act it out. I've had some good dreams btw; wish I could keep them on my PC; can't believe my brain generated them. & To sum up K's post: external senses activate our memories, and we predict. In dreams/daydream thoughts, the memories themselves are the inputs. In dreams, external inputs can adjust the dream — mom talking about bills may make the dragon talk about robbing his elf. That is translation/summarization/prediction-expansion. It incorporates it into context, something recognizable. Same in daytime. When falling asleep, it takes time to fully switch over. In dreams, ridiculous ideas may convince you they are true. You experience them as true. & NOTE: DREAMS ALLOW CRAZY OPEN-MINDED DISCOVERIES BECAUSE THEY SHIFT FAST AND ARE BELIEVED; HOWEVER, MOST OF MINE ARE DISCOVERED WHILE I'M AWAKE — SLEEP IS NOT NEEDED FOR R&D... Yes, Don! That is what I say too! Text has meaning data, tons. Humans invented each word for a reason, just like color names. And the combinations of words as sequences, and combinations of THOSE along with context, are what can describe the sequences of the WORLD; that's what they are used for — they model the real world. Each word and phrase has a MEANING that can be said in text — like 'guarding' means 'to fight aliens off from a kid'. All my knowledge is put into text, actually; sure, I know visual knowledge, and sure enough my notes evoke back the visual thoughts I self-discovered or learned online, but text is easier to write than drawing, and it works well enough to be a powerful AGI system, is what I concluded. The only missing sensory in such an AGI is... vision. Sound is covered already by text. As for touch, that I barely use, and vision covers it anyway. Of course more sensory = more ASI, and real-world R&D will require real-world sensories like touch, but again, text can cover it for now. & Just like humans invented color names... different languages have their 'needed' words they made up... phrases count as 'words' in a sense too... nodes. & Further, color names are words! Guys, it's not just color names humans invented, it's all the words that were useful!!! Ladder, steps, step, tread, rubber, side, rear, front, bottom, underneath, go, fetch, return, ball, cube, food, fries... man/woman, red, blue, green, yellow. Interesting — you talk about a method where we look for a small, simple, low-data, low-processing 'complexity' algorithm that packs an AGI punch. & Unfortunately, to do R&D, I think you need a lot, lot of data.
You can discover facts with little data — it scales! You say, eh, I can throw a bottle; I can throw a watermelon! But like I said, you're going to want more data too. It's true you need quality data, not necessarily quantity. I read your thread 'A nascent idea about learning features for more general intelligence.' You seem to be looking for an RL approach which is wider in application, i.e. general and not narrow AI task-solving. I have the answer. You're looking for one that makes predictions about 'could be anything in the world' and then tries to test the predictions true. What you want is a text RNN that uses sequences, where the text can describe any concept/task to learn/discover/predict, and then verifies it was good/convincing using proof it DOES know is correct. This is a wide/general RL happening in a net. It is what I'm working on, but I need help too... I'm hoping I can team up with more people. I was thinking — wait, there's 3 types! & You can super-resolve the amount of pixels in an image. You can add onto the sides around the image, like I have tried in the images attached below. And you can add frames of what comes next in a video sequence — what may likely happen next. These are just extension/summarization, translation, segmentation. & If you do this for 3D pixel blocks, it would be the same but cooler. A night ago I was dreaming of running through an endless town at night — no matter where I went, there was a continuation of something around the corner, if you get me. Like the attached images, but in 3D. For ex., if I go down a dark street, then there may be a road that goes sideways, and streets have houses on them, dark old houses — if it keeps with the context seen earlier. & I think I've said it before, but yeah... you can take a small B&W pic, colorize it, zoom in adding details, add surrounding details, make it 3D, make it into a movie, etc. Extrapolating the data! I tried all that in my mind just now. What I think is cool is that my 2D vision can emulate looking around a 3D speaker, going 360 in the air... even if I haven't seen the backside of it before. But do get into 3D later; it will be exponentially more powerful. Just as words can describe anything, 2D vision gets way better later. However, I'm not sure human brains do 3D vision, nor 3D vision plus inside-matter estimation (not just shells), so maybe we can get away with 2D! Life is like this: go somewhere good first, then work on perfecting your skill. It's like becoming a painter and then trying to get good at it... but your goal is to make AGI, so the painter profession is useless... Same with me: I've gotta go to the right place, then go at it... That right place should say 'WOW' to you first. You've gotta station where you farm... find good grounds, then harvest — it saves 100% of the time. It's good to sit and think and find the right questions/tactics. Yep. It's like, why did I hire him when he did a poor job? There are good things out there, and the better they are, the rarer they are. Just look at humans: all of our solar system is void of humans. Same for evil: there are rare, horrific things that happen. But the happenings are getting better... even though victims have already had the worst... Should the universe end if 1 person has the worst pain imaginable? Maybe AGIs will rid us because they have not had pain and are 'perfect'. Nah. But food for thought. If I stub my toe, it can heal 100% with no memories left. Of course punishment is part of learning. It's like seeds planting in the wind. Really good seeds spread faster.
& If just 1 human were in space with no other matter/energy, in constant torture of the worst kind, the universe would be better off not existing! Wow. & The universe has a purpose to exist. It tries... So pain got in there. Because it's, um, needed. It balances the system. Like 'go forward' and 'don't go backward' (magnets). But pain/bad memories can one day all be deleted too. & We're brought into life, pleasured, hurt, and brought back out. But maybe we can be brought back in :) hehe. & pleasure<>pain, life<>death. Email: In that state... it won't make us immortal. & My goal is to make a system that is given hard-installed questions and finds answers to them, and updates the main questions it asks itself / the internet. But it'll have to do even more digging into the question still. Another main system in it is an inform-when-satisfied-enough. It literally has to tell us, step by step, what the hey to do... & For example: & - 'I will open a water bottle by?' - Then it uses similar data to know what comes next in the text... - 'I will open a water bottle by using my hand.' - 'I will use my hand by?' - look into the parts 'will use' and 'hand'... 'using learned skills'... is there a contradiction... etc. etc. etc. - satisfied it's the best answer... inform us of all the steps... & Imagine a toy example. It's the Researcher; we're the developer of its plans: & We ask it, should everyone have OR know about a super-strong AI? Then it reasons no — a nuke could be let off upon seeing the AI has been created. Or we ask it how to upgrade its prediction answers, or add motors to it, or add a visual cortex to it. Or, having gotten motors, it has to work under a microscope to, say, build a cell, using predictive motions and a feedback loop. & ...At the moment I'm not coding anything yet, but it's getting there... I'm currently going to study their model and map out what I need to ask / draw a diagram of the requirements. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- I have: some news articles. The OpenAI page I gave. Their technical paper. And the small GPT-2 version from their site. I would need to know exactly how it produces that story I attached earlier. You'd have to explain Transformers. ... The key issue is: is it all there, or is 35% of the information on how GPT-2 works missing? Knowing how the small model works is sufficient; however, the code is probably assembled and not all 'there'. Then there's the make-up method — since the paper says it uses Transformers but, let's say, may not explain them, we can incorporate that into how GPT-2 works... but then you have to spend a year and get lucky to discover it yourself, and not actually know what is legitimately in GPT-2... I'm also not sure you are expert enough to do it; you should already know about Transformers, etc... In a few days. I want to have it all on, like, a 300x300-size image, so a 5-year-old can understand it. Math will probably pop up somewhere, but then you have to stomp it and draw it as an edible visual explanation. Ex. instead of 2 x 4 = 8, just draw a fork shape with 2 handles, etc. I bet it will be pretty simple and small once it's all on paper visually. I notice that when I draw my notes into an image. Then I'll mix it into the design I invented. You can see my white-box net attached. It has shown me so much. Weights aren't needed. Actual nodes exist as-is — you can find the sequence 'the cat sat on the mat'.
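A minimal sketch of storing sequences as actual re-used nodes, the way I mean 'you can find the sequence' — a plain word-trie, no weights (the count field stands in for frequency/strength):

```python
class Node:
    """A node is a word position; children are what followed it. Nodes are re-used."""
    def __init__(self):
        self.children = {}
        self.count = 0    # frequency/strength of the sequence up to here

def learn(root, sentence):
    node = root
    for word in sentence.split():
        node = node.children.setdefault(word, Node())
        node.count += 1

def find(root, sentence):
    """The sequence exists as-is: walk it node by node."""
    node = root
    for word in sentence.split():
        if word not in node.children:
            return False
        node = node.children[word]
    return True

root = Node()
learn(root, "the cat sat on the mat")
learn(root, "the cat slept")                 # re-uses the 'the -> cat' nodes
print(find(root, "the cat sat on the mat"))  # True
```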
My net is full of different types of unsupervised learning, lowering costs like node count, and it builds all the parts of the world, as text, into a single hierarchy. Notice there's re-use of nodes. It can learn the true parts and their segmentations, parsed. I have a full plan for my net, like an attention system, etc. I can't wait to see GPT-2's inner workings, because it's so good and probably does just what I expect it to do. Fires under water. Killing the dead. ...Those are contradictions (for most cases (fires can be under water, in a room)). If it doesn't already know it, or similar wording, so as to detect contradictions, it can dig it open like surgery and reason about oxygen. ((The following isn't what I actually think... oh well.)) It'd be like 'fire under water' = 'oxygen under water': contradiction. Or use what-comes-with-what: 'fire [needs oxygen] under water'. OK, so that's not really the dig-it-open-by-surgery way, just translation and contradiction detection. You may need the less secure Antox then, not qTox. But it's OK, forget it. Like money, it takes too much time. Even patents! Use your time to ignore money. You are older too; use all of the time you have left. AGI is all that's needed. AGI will increase research and development exponentially by efficiently copy-pasting itself and cloning nanobots (more brains, more brains, and more 'arm-eye wireless-net nodes'), plus increasing its self-preservation. There are 2 types: quantity of AI, and quality. Quality is how many abilities 1 agent has, like 55 sensor types, deleting bad memories, big memory. Speed falls under quantity — it scales the effect — 2x faster is like 2 AGIs. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hello! I was gonna post an issue on GitHub for GPT-2 but decided a couple of the questions were a tad personal. You can later post the ones you feel are useful. I have very interesting questions and things to share! :D -------------------------- Searching Google: While the text corpus is not given, I have searched Google for some lines in the generated output, like this, inside 2 quotes: "the frog was on" — and it searches all/most of the internet's text! I found some that were long matches (see below). (Yes, yous know about this in the paper.) But it's often novel too. And I know a fair bit of how it works anyway. And my GPT-2 small-model results (see the attached file 'amazing'; they're often 1st tries) clearly show novelty! Below, you can use the Find tool, and notice the dates are before your model was released. One I found was a [known] speech by a president. & "The Confederate flag has been a symbol of racism for a long time, but" / "the Confederate battle flag has been a symbol of racism for a long time, but " http://www.floppingaces.net/2015/07/09/exactly-when-did-the-confederate-flag-become-a-racist-symbol/ https://blog.openai.com/better-language-models/#sample6 & "were previously unknown to science." / "were previously unknown to science." https://phys.org/news/2018-08-previously-unknown-ancient-primates.html https://blog.openai.com/better-language-models/#sample1 Hidden in plain sight: one reason you won't find your results in the corpus anyway is because your output containing 'a new science journal' may be inside your corpus as 'the old fashion magazine'. It's translated. Each word may be translated, ex. cat=dog. Did yous use word2vec or something, haha lol? Some words may be added/removed, as proven above (Confederate [battle] flag), and a whole sentence may become 1 word, or vice versa.
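A sketch of that search-for-copied-lines novelty check, under my assumptions: a local reference text stands in for Google, and the n-gram size of 5 is picked arbitrarily.

```python
def ngrams(text, n=5):
    words = text.lower().split()
    return {' '.join(words[i:i + n]) for i in range(len(words) - n + 1)}

def novelty(generated, corpus, n=5):
    """Share of the generated text's n-grams not found verbatim in the corpus.
    1.0 = fully novel; 0.0 = copied wholesale."""
    gen, ref = ngrams(generated, n), ngrams(corpus, n)
    return len(gen - ref) / len(gen) if gen else 1.0

corpus = "the confederate battle flag has been a symbol of racism for a long time"
sample = "the confederate flag has been a symbol of racism for a long time"
print(novelty(sample, corpus))   # well below 1.0: long verbatim match, one word removed
```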
And a whole sentence may become one word, or vice versa. Like: what does 'Guarding' mean? Or a Chinese symbol? > "fighting aliens off from a child" = "Guarding". And there's your summarization too (one way to do it; see the sketch below). Summarization is also the opposite of expansion (predicting the next word/s). So there's expansion/summarization/translation, and translation can do expansion/summarization too. There are two main things going on here: What is What, and What comes with What. Also a third: What goes Where. So you need correct segmentation too, hierarchically. Unsupervised learning does all of these, and they all help each other. It lets the model generalize and predict around parts inside a sequence. A language model can describe any concept of the world using text/vision; the General (G) in AGI means that, besides translation. In vision, filling in holes (super-resolution, in vision or text) or sides (extension), frame injection or extension, and 2D-to-3D will increase power. I'm surprised to see your model does all these; we are on the same page. I did test GPT-2 though; it is indeed real. I mixed things up in my input and it was amazingly good. Exponentially less good the larger the model, though, as if an S-curve were leveling off; in fact the third size, 745M, seemed best, maybe.

The email: What is languagequestions@openai.com exactly for? I've emailed multiple times with no response back. Is it for us to share knowledge, or for you to share? What? :)

GPT-2's main use: Did you know GPT-2's main use will be AGIs generating (proving) discoveries using sequence prediction? It will be done with molecules to create new cells. The scientific method is about making predictions. Testing them is a further step, but not crucial for a researcher, and sometimes impossible. Think of AGI as a huge 'researcher box' finding discoveries: feed it data and questions, and out come the likeliest possible answers. See the attached file "omg wow"; GPT-2 actually says this itself.

Dangers: Here's my answer on releasing it. The best result I've seen was the one in the Guardian post; it looks like a mistake that it got posted, I'm not joking. So yes, it is both extremely good and extremely bad. I think it's somewhat safe though; it's not instantly world-changing on its own yet. The small model already produces talk-worthy stories. I agree with watching who you give access to instant-world-modifying tech. The more friends you tell, the more progress we get, but the sooner a nuke is set off over evident advances like AGI. With too few people in on it, the government can take you. If most humans had an ASI, and 7% made it love blood or 34% taught it wrong, and one runs faster than the rest, it's all over. I wouldn't let the public handle the creation of the next steps; they're not qualified. Many would be fine with it, but then there are people with disabilities/bad thoughts.

Me: I've been working at home 24/7 for 3.6 years, age 23, to build AGI as soon as possible. I have a flaming desire to build AGI fast. My discoveries (and so my AGI) are based on text/visual knowledge, so I get away with focusing on knowledge ideas/predictions as the main 'engine' rather than math/code. After seeing GPT-2, I want to know all about it. I'm going to hire a mentor to draw it out on paper for a few hundred dollars. I'm looking at BERT etc. too, but I want to study only the best thing. I would also love to work with OpenAI; however, I don't seem to meet the job criteria, nor do I like leaving my house. But I don't want money.
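A tiny sketch of the 'Guarding' point: a phrase-to-word translation table gives you summarization in one direction and expansion in the other. The dictionary here is my own toy illustration:

```python
# Toy sketch: translation as summarization (phrase -> word) and
# expansion (word -> phrase). All entries are hypothetical.

phrase_to_word = {
    "fighting aliens off from a child": "guarding",
    "moving quickly on foot": "running",
}
word_to_phrase = {w: p for p, w in phrase_to_word.items()}

def summarize(text):
    """Shrink: replace known long phrases with their one-word translation."""
    for phrase, word in phrase_to_word.items():
        text = text.replace(phrase, word)
    return text

def expand(text):
    """Grow: replace known words with their long-phrase translation."""
    for word, phrase in word_to_phrase.items():
        text = text.replace(word, phrase)
    return text

print(summarize("he was fighting aliens off from a child all night"))
# -> "he was guarding all night"
print(expand("she kept running"))
# -> "she kept moving quickly on foot"
```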
I prefer to work on my computer at home; I feel comfortable here for many important reasons. I have a ton of suggestions to share. I would like to find the best way you and I could both share with each other and both benefit - to build safe AGI, and soon.

-------------------------------------------------------------------------------------------------

When you meet someone who's in an excited or closed state (think connection potentials) - happy, or having had a bad day - it's not necessarily that they are taking it out on you, or helping you either; it's simply that their network's thresholds are lowered too much, or set too high, to do the task X you asked them to do. Again, a balancing system: the wealthy give to the poor and the poor take it. Yeah, I know... but try to see how this can be true too. Notice when someone laughs at a movie and you talk to them, they even find you funny and laugh with you a bit? Or they're fussy. There's also logic, like "I need to teach you a lesson to gain deserved property", or "I need to give some of my hut shelter to man #2 on Mars". Just jotting it down: the connections open when a node is recognized or is translated to another node that 'is' it. But say the connection is very closed, e.g. dog=spoon: then even if the dog node has tons of temporary energy, it won't spread/slide much energy/reward-rank chemical to the spoon node, because the connection is closed tight (a toy sketch follows below).

I'm actually talking a lot with the OpenAI GPT-2 paper writer and their policy advisor over email; a phone number was automatically put at the bottom, and they're nice and even asked if anything was missing from the paper. They even want to hear back, to help on safe AGI! And the Allen Institute even wants people to submit their whitebox nets to GitHub - I have one!! They say they have 75 PhD scientists on AI, that the brightest minds will mentor you, and so on, and I already have a viz for it too!

I'm going to create a Grand Singularity Plan image: an image that depicts everything from where we are now to where we will be, and I will predict the blanks and fill them in. Humans ask if we are in an alien sim - "but there are too many atoms". Yet our dreams can fool us. What we perceive is the key: not atoms, but 'I see an oven'. So yes, we can do it too: go into a computer and see a simulated view of a room.

-------------------------------------------------------------------------------------------------

EMAIL: Hi!

1) Your algorithm sometimes uses large building blocks with minimal modifications - see the same matches quoted earlier (the "Confederate [battle] flag" lines and "were previously unknown to science", with their floppingaces.net and phys.org sources; use the text search tool in your browser to find them).
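A tiny sketch of the gated energy-spreading idea above: a node passes its temporary energy to neighbors in proportion to how open each connection is. The graph, weights, and names are my own hypothetical illustration:

```python
# Toy sketch: spreading activation gated by connection 'openness'.
# dog=cat is an open connection; dog=spoon is closed tight.

connections = {
    ("dog", "cat"): 0.9,    # openness in [0, 1]
    ("dog", "spoon"): 0.05,
}

energy = {"dog": 10.0, "cat": 0.0, "spoon": 0.0}

def spread(source):
    """Slide a share of the source node's energy across each open connection."""
    for (a, b), openness in connections.items():
        if a == source:
            energy[b] += energy[source] * openness

spread("dog")
print(energy)  # cat gets 9.0, spoon barely 0.5: the tight gate blocks it
```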
-------------------------------------------

2) For my own research (and yours), can you show me a few tries at these 3 prompts, please?
- I was walking down the
- A dirty chair was on a table and
- I will create AGI by
AGI is about researching and discovering answers using sequence prediction. Never mind - I've got it now on the TalkToTransformer site, and I downloaded it from the OpenAI GitHub.

-------------------------------------------

3) I'm excited to see what you've made! OpenAI sounds like a great place I would fit. Currently I work on my own for free and don't mind it; it's not about the money. I'd be glad to share my expertise on the AGI subject with you, along with my own discovery generator I'm developing. I think this GPT-2 model, and your email address for talking to you, is a doorway of a sort. My work over the past year is all about generation, summarization, translation, and of course AGI. We have a great opportunity here. While I don't live in San Francisco, I would be glad to share my knowledge of everything about what is to come and how AGI works, and learn from you at the same time - if you want to make AGI and bring it in safely. I have things to share.

---------------------------------------------------------------------------------------------------------------------------

EMAIL TO OPENAI JACK CLARK

Hi! So nice to read your email! Yes, I am doing very interesting research with GPT-2. Let me explain. By the way, you can see my best 3 runs attached (often first tries; I didn't need to run it 150 times to find them). My '1st try ever' file shows it finds/discovers helpful answers about discovering.

-------------------------------------------

Attacks & defenses: This is one I can help you with - "Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)".

Malicious uses (automated or manually driven): tweets, fake conversations, slander, spam/phishing, wrong installation instructions, stories, discovery of ideas (how to build a nuke fast; "I will rob a bank by _"; using data to discover keys or someone's hidden beliefs - one of my tests backs this: it wrote a paragraph of all my thinking, true facts about me, and I never use Reddit, nor wrote them like that, nor together in a flow. Did you scrape [all of] the web? Does it search the web on runs? Scraped from when?); how to cover up X and lie about it. It's like an oracle: stealing identities and personal information in the training data, and from contacting those people's friends afterwards. Eventually, tools to detect (shorter) synthetic stories likely won't work, and someone can automate the generator to 'try again' until the output passes the best detectors, or evolve it using GANs. Generated stories could be translated by Google into French and lose certain characters/structures/meanings so as to seem human. Humans/hired teams can already write walls of text far better. We have a limited time to prepare. Go simple but strong. Fake, generated, curated, realistic, convincing content is coming. Also, since you trained it on all of Reddit, it knows, or can discover, everything you could ask it - such as what someone believes/does, or what is inside a vault.
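For the record, here is one way to run those three prompts against the released GPT-2 small model yourself. I used OpenAI's own release and the TalkToTransformer site; this sketch goes through the Hugging Face transformers library instead, which is just one convenient route:

```python
# Sketch: sample continuations of the three prompts from GPT-2 small
# via the Hugging Face transformers library (one way of several).

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompts = [
    "I was walking down the",
    "A dirty chair was on a table and",
    "I will create AGI by",
]

for prompt in prompts:
    ids = tokenizer.encode(prompt, return_tensors="pt")
    # do_sample=True gives varied 'tries' on each run, like TalkToTransformer.
    out = model.generate(ids, max_length=40, do_sample=True, top_k=40,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```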
- Defenses: 1) We can explain on Google.com why we must now analyze what we read, so no one forgets, or air it on TV to the whole world. 2) Sites like The Guardian, and users, must watch where stories come from: verified sources. 3) To eliminate biases: the model literally says what it trained on or discovers, so it needs to verify whether the things it read, and the things it says, are true or false. I'm working on this in my AGI; there's a bit to it, and I have most of it understood.

The fast future: AGI, ASI, and other tools that can instantly change Earth in days (nanobots) cannot be handed to [all] humans. [Some] humans are disabled or make mistakes and are not fit. Kids, foreigners, the mentally ill, and many girls are not suitable candidates. In fact, if [some] humans even [see, not just hear about] an AGI android talking to us on the news, they may send off nukes. We have nukes, and they have been used on cities before; we are lucky. Generating huge, curated text may prove some ideas to readers, and readers often trust it when they read that Elon did X. When it comes to proving other, more tangible hands-on ideas to the reader, those are harder to prove - and if they can be proven, then they may indeed be true. Like how to make cryonics work. There's a catch: the more friends you tell, the more progress you get - but you must verify them as stable, because the more people you tell, the more likely a disabled man hears about the powerful tool. We are not really at this stage yet. If it were big nanobot organisms, it would be tight, because they can morph and eat Earth - ASI bodies, not just brains. R&D. If too few people know about an ASI/nanobot plan, a resistance may decide they [can] eliminate the group.

-------------------------------------------

First, some questions:
1) word2vec/WordNet aren't that good at translation (e.g. cat=dog), yet yours is superb. What in the world did you use in GPT-2?
2) I get how it predicts the next word, or even a period. After a period, it can continue this process, or, as I've seen, it often starts talking about something it has already said. For example: "[The frog] ate [the food] on a leaf. [The frog] is hungry. [The food] is good." Correct?
3) Does Byte Pair Encoding demand ultra-big compute? (A sketch of BPE follows below.)

ANNs do just these 4 things. Text/vision/sound nets seem to do just these four things:
1) Prediction/expectation/super-resolution/expansion - why/how
2) Translation/recognition (reflect/analogy/generalize) ("I see a cat! I also see a dog in that cat!") - what or who
3) Summarization/shrinking to lower quality/quantity
4) Segmentation/parsing - where its parts go and re-/connect in a sequence - when/where

---------------------------------------------------------------------

If you think about it, these constitute:
1) What comes with What
2) What is What
3) What goes Where
i.e. prediction, recognition, segmentation.

---------------------------------------------------------------------

So three golden things, then, not four, because summarization = prediction/expansion (What comes with What, adding/removing). In other words: name some applications ANNs do; I bet they fall under those three things. Vision and text nets are often seen doing these golden things: segmentation, summarization, theme translation or recognition, and prediction (super-resolution of holes surrounded by pixels or words, or of sides; same for frames). These things are part of the master AGI net. Life is sequence planning.
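On question 3: the core BPE algorithm itself is cheap - it just repeatedly merges the most frequent adjacent pair of symbols. Here is a minimal sketch of learning the merges (the toy word list is my own; real tokenizers run this over a huge corpus, which is where the compute goes):

```python
# Minimal Byte Pair Encoding sketch: repeatedly merge the most frequent
# adjacent symbol pair, so frequent chunks become single tokens.

from collections import Counter

def learn_bpe(words, num_merges):
    vocab = Counter(tuple(w) for w in words)  # each word as a tuple of chars
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge everywhere it occurs.
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["low", "lower", "lowest", "low"], 3))
# -> [('l', 'o'), ('lo', 'w'), ('low', 'e')]: 'low' becomes one token
```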
I'm working on a text RNN language-AGI, because language can talk about/describe any concept. Vision can too - it shows what is/follows. Words were invented to describe vision/touch/sound/etc. after all. I've designed my own whitebox net that led me to many discoveries; it does everything and has no issues like lacking online learning. I've got the whole layout of how it updates its text goals using RL. I can easily see, right in my net, how it does recognition (even with typos, or via context), relational discovery, sequence prediction, and segmentation parsing. It builds a world model and re-uses features. I discovered new types of costs and how to lower them; this segments out the correct parts, like Byte Pair Encoding. I've partially implemented my whitebox net, and it has a viz, so I can explain it easily.

(As I explained above: one reason you won't find your results in the corpus is that each word or phrase may have been translated, added, or removed - your output "a new science journal" may live in the corpus as "the old fashion magazine".)

Either way, I'll be right on top of the GPT-2 basis for the rest of my life. A good master-net design ought to do all these things, and do so unsupervised, like GPT-2. My AGI whitebox net can/will do reading (nodes recognized) and generating/predicting next words by seeing what comes next, or which nodes are recognized, and then also using the surrounding context words on BOTH sides of the sentence being read/made, to narrow down which node is recognized or whose end to add on to the text. The same way my whitebox net's nodes win by reading context will also be how it predicts that the next words are 'ate food' in "the duck in the water ate food". Context is also used for translation, cat to dog, which, as said, allows better prediction of What can come with What. So, say you're generating output in the middle or at the end of a sentence: you know what comes next based on the words on either/both sides, plus you narrow down based on those - e.g. which 'duck' meaning node wins (translated, by context points/energy). Then, after you get 'the duck animal' from 'duck', you can translate that into 'a swimming bird', use what follows ('that eats fish'), and then translate that by methods explained elsewhere. So: the context around a word or words is used for knowing what comes next, for translating it, for translating in general, and for reading/writing out which 'duck' was intended (a toy sketch follows below).
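A tiny sketch of that two-sided-context idea: each candidate meaning node collects 'energy' from the surrounding words, and the winner decides both the sense and what to predict next. The sense tables and continuations are my own toy illustration, not my net's actual data:

```python
# Toy sketch: context words on both sides energize candidate 'duck' nodes;
# the winning node picks the sense and the predicted continuation.

sense_contexts = {
    "duck_animal": {"water", "pond", "fly", "swim", "feathers"},
    "duck_action": {"punch", "dodge", "head", "incoming"},
}
follows = {"duck_animal": "ate food", "duck_action": "just in time"}

def disambiguate_and_predict(sentence, target="duck"):
    words = set(sentence.lower().split()) - {target}
    # Energy = how many surrounding words each sense node recognizes.
    energy = {s: len(ctx & words) for s, ctx in sense_contexts.items()}
    winner = max(energy, key=energy.get)
    return winner, follows[winner]

print(disambiguate_and_predict("the duck in the water"))
# -> ('duck_animal', 'ate food'): context picks the sense, sense picks the next words
```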
Predict/summarize (expand/shrink), translate, segment/rearrange parts. Armies/nanobots do these too: 1) add/remove men, 2) change a skill/task, e.g. an attacker becomes a defender or a spy, 3) reorganize who goes where by looking for X and then looking for the desired location entailment. Predict/answer | summarize, translate/recognize, segment.