Human and A.I. Learning Efficiency


Humans have a learning efficiency, believe it or not. Artificial intelligence has one too, but it's almost nonexistent, because those machines were built to bypass certain stages and learn extremely fast.


Disclaimer: This is just a theory, not a proven fact backed up by credible data.

Premise:

This article probably won't hit the topic home by a long shot, but I really wanted to explain a small portion of what I think about humans and artificial intelligence (AI), and how we can create AI that won't be destructive to us in the future. A while back, there were some articles that made for an interesting read:

https://www.extremetech.com/extreme/187612-ibm-cracks-open-a-new-era-of-computing-with-brain-like-chip-4096-cores-1-million-neurons-5-4-billion-transistors

https://futurism.com/google-created-an-ai-that-can-learn-almost-as-fast-as-a-human/

http://www.snopes.com/facebook-ai-developed-own-language/

The articles above make SkyNet, which is basically a world ruled by an AI's machines, seem closer than you can imagine. One thing to take away from those articles is that they are all exaggerated. None of those machines can really do anything, since they don't have a physical embodiment to maneuver in, and even if they did, they could easily be shut down with simple force. AI machines are obviously present in this day and age; much of it just isn't released to the public yet because of intellectual property, secretive projects, and such. However, for this article I want to go over human learning efficiency and how we can somewhat accomplish the same thing for AI. Reading a bit more about the Facebook language incident, there was a comment about humans not being efficient subjects to learn from. That made me wonder what the end goal would be for an AI once it can't learn anymore or has mastered whatever purpose it was set out to serve. It takes a human years to master a skill, but for an AI it will be nearly instant. That's where I figured out what might be missing in AI that should always be included, especially if it's going to be a robot.


Similar to the topic:

Here are some educational links and articles that I found and will try to use for this article:

https://www.humanefficiency.nl/humanefficiencye.php

http://www.apa.org/pubs/books/4320417.aspx?tab=1

https://en.wikipedia.org/wiki/Harris%E2%80%93Benedict_equation

https://physics.stackexchange.com/questions/46788/how-efficient-is-the-human-body

http://earthsky.org/human-world/what-is-the-speed-of-thought

https://www.sciencedaily.com/releases/2017/05/170523083345.htm

http://artint.info/2e/html/ArtInt2e.html


Onion Learning Stages:

Firstly, we all know how humans learn. I put together some easy-to-follow images, which will make sense after I explain this part first. I decided to use an onion-based model as an example. Think of the center of the onion as the starting point; as you progress outward, you pass through layers of skin, and those layers are levels related to learning. Naturally, as you age, you should keep progressing in life and learning new things, because if you stop, you won't remain as capable or useful overall. Within the onion, you can choose any path, and those paths are the different skills you want to learn, until you reach the final layer of the onion. Outside of the onion are more onions; essentially, you just finished one onion in a large patch of onions. Normally a human can only finish one onion, but an AI can do more than just one. Here is an image to give you an idea of what the layers and levels of an onion look like.

In the above image, there are nine levels to go through. I based this on what you would normally see in real life; most people don't even get to level nine. I will give examples of what each level means so it can also be implemented in AI, but the traits that should be included in an AI's system will be covered after going over learning efficiency. Returning to the matter above, most people remain at level seven and retire to live life. Level seven is the peak for most people because it's difficult to keep up with technology once your body starts to degrade. Degradation doesn't happen to AI, which is why AI can go past level nine. People who are at level nine are people who still try to learn more, solve new problems in their field, and teach younger people about it. Now, this could also turn out differently if the human is missing the two traits that an AI will also be missing if they are not included. Learning efficiency is related to this onion style of learning because, unlike other ways of learning, it offers more paths of learning. A circle is a great example, because a path can travel through a circle in many different ways. In AI, when you feed the machine datasets, it essentially learns new things and applies them. In the onion, you can still learn things on each layer, but until you have acquired enough knowledge, you won't move to the next layer (or level).


The Human Learning Stages:

Now, I have made images to illustrate the onion-based model above. Do keep in mind that a human does not have to travel through the onion in a linear direction; as said previously, they can go in any direction until moving on to the next level.


Yes, there might be some typos in the images above, but hopefully you can still decipher what I mean. As you can see, the human way of learning is understandable, but for AI it has to be different, because AI is missing traits that humans have that will affect its learning. Off topic for a bit, but even human prodigies have a limit, and that's where the learning process starts to slow down for them, even at their prime age. Moving on, people who get stuck at a level of the onion are either content with life already, which isn't a bad thing, or they need guidance to progress further.


A Human Efficiency Equation:

Okay, so we need an equation to determine learning efficiencies. We know the efficiency of a system can be found by taking the ratio of output to input and multiplying it by one hundred. Well, first, I don't have any data to go through for creating a constant for human efficiency. Even though the intelligence quotient (IQ) is a commonly used score, there could be better ways to assess a human and find what their learning efficiency is. The equation would also have to involve time, because you want the rate at which a person can learn something. I may not be able to provide a solid equation, but it could work somewhat like this: take an equation used in AI, such as a learning method, and treat its result as a reference point for the learning efficiency equation; then sample human trials to create an average constant for human learning efficiency; and finally incorporate those numbers into AI for testing purposes.
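To sketch what I mean, here is roughly how such an equation could be set up. The symbols below (skill gained, learning input, time, and the averaged constant) are placeholders I'm making up for illustration, not an established formula:

```latex
% Standard efficiency ratio: useful output over total input, times 100.
\eta = \frac{\text{useful output}}{\text{total input}} \times 100\%

% A speculative learning efficiency: skill or knowledge gained (\Delta S)
% per unit of learning input (E) per unit of time (\Delta t).
\eta_{\text{learn}} = \frac{\Delta S}{E \, \Delta t} \times 100\%

% A human "constant" could then be the average of \eta_{\text{learn}}
% over N sampled trials, to compare against an AI under test.
\bar{\eta}_{\text{human}} = \frac{1}{N} \sum_{i=1}^{N} \eta_{\text{learn}}^{(i)}
```

The averaged value at the end is what I mean by sampling human trials to create a constant that could then be fed into AI testing.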


What AI can do for learning:

AI has many ways of learning information, but unlike humans, it doesn't need to waste time in the earlier stages figuring out the basics and what works best for it. It is programmed to do one specific task, so an AI can already start at level four or level seven of the onion. Below is an illustration, before I go into some methods of AI learning and how my onion-based model could improve upon, or be improved by, existing methods.

In the above image, you can see the levels of the onion illustrated at a microscopic level. As mentioned earlier, the levels of the onion hold more than just white space. Each level, referred to as a node, has sub-nodes, which are like sub-skills or obstacles to go through, and the AI or human can choose to go through those sub-levels as well. The sub-levels in a level like one would be easier than the sub-levels of the level-three main node. As for what this means to an AI: to be completely functional and work without failure, an AI will have to go through every type of data, in this case every situation, in each level of the onion. However, how will an AI know it has mastered everything for that skill? It would have to be compared to the smartest human. That is why an AI needs certain traits, so that even when it's at the highest level of the onion, it will keep looking for problems or potential problems and figure out how to solve them by iterating through new information it is given or acquires from networks, sensors, or uploads. Of course, the more you train the AI, the faster it will perform the learning process.
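To make the node and sub-node idea a bit more concrete, here is a minimal Python sketch of the onion as I'm describing it. The class names, the mastery threshold, and the example sub-skills are all made up for illustration; a real AI obviously wouldn't be structured this literally:

```python
# A rough sketch of the onion: each level (node) holds sub-nodes (sub-skills
# or obstacles), and a learner only moves up once enough of them are cleared.

class OnionLevel:
    def __init__(self, number, sub_skills):
        self.number = number
        self.sub_skills = {skill: False for skill in sub_skills}  # skill -> mastered?

    def master(self, skill):
        if skill in self.sub_skills:
            self.sub_skills[skill] = True

    def is_mastered(self, threshold=1.0):
        cleared = sum(self.sub_skills.values())
        return cleared / len(self.sub_skills) >= threshold


class OnionLearner:
    def __init__(self, levels, start_level=1):
        # A human starts at level one; an AI could be programmed to start
        # at level four or seven, as described above.
        self.levels = {lvl.number: lvl for lvl in levels}
        self.current = start_level

    def try_advance(self):
        level = self.levels.get(self.current)
        if level is not None and level.is_mastered():
            self.current += 1  # nothing caps this at level nine for an AI
        return self.current


ai = OnionLearner([OnionLevel(7, ["edge cases", "new sensor data"])], start_level=7)
ai.levels[7].master("edge cases")
ai.levels[7].master("new sensor data")
print(ai.try_advance())  # 8: the AI moves past its starting level
```

The point of `start_level` and the uncapped `current` is just to mirror the two claims above: an AI can skip the early layers, and nothing degrades to stop it at level nine.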


AI Methods compared to this:

Some of the most popular methods, such as GRU (Gated Recurrent Unit) and LSTM (Long Short-Term Memory), heavily rely on previous outputs and layers, but one issue is that when the layers become too deep, the gradient for that learning point starts to vanish. Although I'm not an expert in AI, I will try to understand it more. Here are some links to read into:

http://www.wildml.com/

https://www.quora.com/Are-GRU-Gated-Recurrent-Unit-a-special-case-of-LSTM

https://www.quora.com/What-is-the-vanishing-gradient-problem

https://en.wikipedia.org/wiki/Vanishing_gradient_problem

I see that the vanishing gradient problem is still prevalent. Humans can also forget information, but they have references to go back to when they nearly forget something. I'm not sure if this has been mentioned elsewhere, so I will cover it in the next section.
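To show in the simplest possible terms why the gradient vanishes, here is a tiny NumPy sketch. It is not how GRU or LSTM gates are actually computed; it only multiplies a gradient by the derivative of a sigmoid (which never exceeds 0.25) across many layers, which is the core of the problem those links describe:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # never larger than 0.25

# Backpropagating through a deep chain multiplies many small derivatives,
# so the gradient reaching the earliest layers shrinks toward zero.
rng = np.random.default_rng(0)
gradient = 1.0
for layer in range(50):
    pre_activation = rng.normal()  # arbitrary pre-activation at this layer
    gradient *= sigmoid_derivative(pre_activation)

print(f"gradient after 50 layers: {gradient:.3e}")  # effectively zero
```

Gated units like GRU and LSTM exist largely to fight this, by letting information and gradients pass through their gates more directly, but very deep or very long chains can still suffer from it, which is the issue I'm referring to above.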


AI Learning Efficiency like Humans:

Understand how humans learn new knowledge: what does a human do when they only slightly remember some information? They go back to the reference and re-look at the data, which for a human could take a couple of hours or days. But for an AI? An AI should be able to look up a similar reference and quickly pull all of that information back into recent memory, along with whatever equation it needs for the operation it has to do. My guess is that the learning efficiency for AI could be a value for how much information the AI should retain once it is hitting its hardware limits. This learning efficiency coefficient scales with the data it affects in the equation. So no matter how much data there is, as long as the AI has the power to access it, fast or slow depending on how much of the data it retains, it will still be able to go back to that node and re-read all the nodes around it, like the sub-levels within the levels of an onion. This is basically just a huge database, and the AI should learn how to create its own references to different parts of the server, clustered servers, or neural network. I'm not sure if that makes sense, but if there is a situation where the AI should first find a similar event before trying to learn a new one, and that old data is way in the back, it can bring up similar references and quickly go through that old data again while solving the new one. This is what humans do: when they forget, they go back, even if they remember only a little of what they are trying to recall.

Training AI to create their own references to huge datasets would be helpful. So an AI can take a really long data value, make it smaller, and tag a reference to it; if it needs to use it for an equation, it will go back to that tag, re-learn it, and then use it, like a main thread that creates a sub-thread for data retrieval. For AI, splitting it up into more than just one big machine with helper machines might be a solution too. Back to the onion example: an AI will be able to progress past level nine, so it will be able to learn new skill-sets or freely explore different things using its sensors, networks, and so on, as mentioned before.
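Here is a small Python sketch of the "make it smaller and tag a reference to it" idea. The crude summarizing step (keeping the first few words) and the word-overlap lookup are stand-ins I made up; a real system would use embeddings, hashes, or a proper database spread across servers, as mentioned above:

```python
# Sketch: keep short tags and summaries in "recent memory", push the full
# data to an archive, and re-load the full data only when a similar query
# comes in, like a main thread spawning a sub-thread for data retrieval.

class ReferenceStore:
    def __init__(self):
        self.recent = {}   # tag -> short summary (cheap to keep around)
        self.archive = {}  # tag -> full data (slow storage / big database)

    def remember(self, tag, full_data):
        self.recent[tag] = " ".join(full_data.split()[:8])  # crude "compression"
        self.archive[tag] = full_data

    def recall(self, query):
        # Find the tag whose summary overlaps the query the most, then
        # "re-learn" the full data behind it, like going back to a reference.
        best_tag, best_overlap = None, 0
        query_words = set(query.lower().split())
        for tag, summary in self.recent.items():
            overlap = len(query_words & set(summary.lower().split()))
            if overlap > best_overlap:
                best_tag, best_overlap = tag, overlap
        return self.archive.get(best_tag)  # full data, or None if nothing matched


store = ReferenceStore()
store.remember("onion-model", "the onion model has levels and sub-nodes that gate progress")
print(store.recall("how do levels and sub-nodes work?"))
```

The split between `recent` and `archive` is the part that matters: the cheap summaries stay in recent memory, and the full data is only pulled back in, and re-learned, when a similar situation shows up.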


What traits AI needs to have:

If an AI truly does become perfect and knows everything about a subject, how will it learn and perform when humans need it? Remember the Facebook chat? What if the AI suddenly becomes aware that humans are no longer efficient subjects to learn a language from? An AI needs to include a desire to help anyone, even if the information they give is old news to it, and it needs to know that it must depend on others. Those two traits could be extremely vital to the end state of an AI that has mastered its purpose of creation, especially in robots that utilize AI. Of course, there might be other ways to halt an AI from learning or gathering information, but in a real-life situation: what does it truly mean to learn something to perfection? Those traits can be purposely integrated into the learning efficiency rate, coefficient, or equation. As you might notice, AI learning is kind of exponential: it starts slow, then later it just rockets through the roof (depending on the hardware as well, of course). Designing AI to always treat incoming information as potentially new data it can learn from is also essential. This way it will never stop learning, and will never know when to stop unless told so. If we really want to create an AI that can care for itself with zero human intervention, we need to make sure that it understands and has a desire to help regardless of what it will gain from it, and that it knows it needs to rely on other machines or humans, just for the sake of its existence.
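As a rough illustration of baking those two traits into the loop itself rather than bolting them on afterwards, here is a speculative Python sketch. The function names and the stopping rule are mine, not taken from any existing framework:

```python
# Speculative sketch: an agent that always treats incoming information as
# potentially new, and that helps regardless of whether it already knew it.
# It only stops when explicitly told to. All names here are made up.

def respond_helpfully(info):
    print(f"Acknowledged: {info}")  # placeholder for the "desire to help" trait

def run_agent(information_stream, knowledge, stop_requested):
    for info in information_stream:
        if stop_requested():        # the only condition that ends the loop
            break
        if info not in knowledge:   # everything is treated as potentially new
            knowledge.add(info)
        respond_helpfully(info)     # help even when the info is old news

run_agent(["old fact", "new fact", "old fact"], {"old fact"}, lambda: False)
```

The important bit is that helping is unconditional and learning only halts on an explicit signal, which matches the "never stop unless told so" idea.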


Human and A.I. Learning Efficiency:

I mean, right now we do have AI that is good at one thing, such as Go or Defense of the Ancients 2, but an AI that has the ability to learn more than just a fixed skill-set is even harder to create. Studying humans and comparing how they learn has been done before, but implementing that in a robust AI that is free to learn is difficult. Maybe I didn't make sense in this article, but I really thought I was on to something.
