In our continuing series on Smartphone AI, this article covers Smartphone Artificial Intelligence for Beginners. Today's mobile technology is dramatically changing the way we live, and it can be complex to understand. If you are a beginner and interested, we want you, the reader, to come away with a clearer understanding.
2017 was the year Artificial Intelligence (AI) was introduced into smartphone onboard hardware as integrated circuit components. Before that, smartphones accessed artificial intelligence through cloud-based services, and they still do.
The onboard hardware that makes Artificial Intelligence available on your phone uses a System-on-a-Chip (SoC) design, which combines the components below into a single microprocessor chip for miniaturization. This contrasts with a home computer, which may use these chips separately on a circuit board. These systems, architectures, and terminology are changing rapidly due to accelerated innovation and design engineering.
- IC – Integrated Circuit also referred to as a Chip or a Microchip
- CPU – Central Processing Unit – Normal Computer Processor or Microprocessor Chip
- GPU – Graphics Processing Unit – For Graphics & Images
- NPU – Neural Processing Unit – Artificial Intelligence processors, also called Neural Engines or On-Device AI Engines
- FPGA – Field-programmable Gate Arrays
- ASIC – Application-specific Integrated Circuit
- IPU or ISP – Image Processing Unit or Image Signal Processor
Smartphone Artificial Intelligence Uses
Many of the onboard Artificial Intelligence features listed below are now integral to smartphones. The mobile nature of the smartphone is actively changing how we function as a species. To have a search engine, language translator, virtual digital assistant, and camera, among other things, included with your telephone is unprecedented.
- Mobile Apps
- Virtual Digital Assistant – Siri, Google Assistant
- Voice and Speech Recognition
- Language Recognition and Translation
- Object Recognition
- Real-time Human Activity Recognition
- Extended Reality
- Augmented Reality
Smartphone Artificial Intelligence (AI)
Apple, Samsung, Google, and the other leading cell phone manufacturers are hard at work understanding this AI technology and utilizing it effectively on their devices. One of the things they are doing is adding powerful artificial intelligence chips to their phones. AI already exists on smartphones in a basic way, such as in Google's algorithms, but these new AI chips are set to be far more powerful.
Edge-AI technology is the force behind this, and it will be in nearly every new cell phone model by 2022. The technology essentially turns smartphones into mini learning computers. Of course, they will still need to be operated by us, but the belief is that many of their functions could be automated, voice-controlled, and seamless. The idea is that they will also learn and adapt to our habits, allowing us to cede more and more control and trust to them in helping us organize our lives.
Does Artificial Intelligence Make You Nervous?
While this has some people worried, especially regarding data sharing, surveillance, and privacy, the idea is not to let the smartphone control us but to give us more options for controlling it. Smart homes are a great example. Imagine you return home from work at 5:30 pm every day and have started using your NEST app to turn on your air conditioning or central heating, depending on where you are in the world.
Of course, you can automate this via your app, but if you forget your phone, you realize AI could do this for you. There may be times when we don't want it to, so the phone might first check the temperature outside, then check the temperature inside using your home's thermostat, and make a judgment on whether or not it should turn the heating on for you.
This is just an example of what artificial intelligence on a smartphone could achieve. It may simply be more useful for your phone to alert you rather than deciding for you. But it’s fascinating to speculate about what this technology could bring.
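The thermostat judgment described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the function name and thresholds are our own inventions, not part of any real NEST or smartphone API:

```python
# Hypothetical sketch of the thermostat judgment described above.
# The function and thresholds are invented for illustration; they are
# not part of any real NEST or smartphone API.

def should_turn_on_heating(outdoor_temp_c: float,
                           indoor_temp_c: float,
                           comfort_temp_c: float = 21.0) -> bool:
    """Decide whether to switch the heating on before the user arrives."""
    if indoor_temp_c >= comfort_temp_c:
        return False   # already comfortable indoors, nothing to do
    if outdoor_temp_c >= comfort_temp_c:
        return False   # warm outside, the house will likely warm up on its own
    return True        # cold inside and outside: turn the heating on

# Example: 5 °C outside, 16 °C inside, so the assistant would heat the home.
print(should_turn_on_heating(5.0, 16.0))   # True
```

A real assistant would weigh many more signals (your commute, the forecast, energy prices), but the "check, then judge" structure is the same.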
What Artificial Intelligence Is Not
Artificial intelligence is something we've all heard of, but it is not necessarily what we think it is, and some know very little about it, especially its use in our mobile smartphones. In our previous blog about artificial intelligence, Smartphone AI: Artificial Intelligence and Popular Culture, we discussed some of the misconceptions about the concept and how they have been fuelled by popular culture such as movies and TV.
Artificial intelligence is often viewed as a computer system that becomes self-aware, deduces that human beings are a threat, then sets out to destroy us. It's a TV and Hollywood plotline that has been done time and time again by many different franchises. Could this become a reality? Possibly, because humans do not yet know the full capacity of what is possible with Artificial Intelligence.
Another misconception about artificial intelligence is that it simply is the disembodied voice of Virtual Digital Assistants like Siri, Google Assistant, Alexa, or Microsoft's Cortana (the latter actually named after a fictional artificial intelligence character from the Halo game series), connected to devices such as Apple, Google, and Amazon products.
The reason for such misconceptions is that each device seemingly has real intelligence, and a personality to boot. Although they don't have the equivalent of a human personality, real Artificial Intelligence machine learning does continuously gain knowledge.
Machine Learning Teaching Itself On Your Smartphone
According to Apple, Siri is always learning, as are the other virtual assistants. In truth, Siri, Alexa, Cortana, and Google Assistant represent different levels of artificial intelligence with varying support systems. As impressive as they are, at this stage they are based on programming and on how much data they have to work with through machine learning.
Companies like Google and Apple feed your smartphone's machine learning programs massive amounts of data accumulated daily from your behavior. Google's search engine holds more than 92% of the worldwide search market, and Google has almost 4 billion users worldwide, so no other company comes close in data collection. On top of that, there are over 2 trillion Google searches per year.
We expect these Artificial Intelligence databases to keep improving, and who knows, maybe true Strong Artificial Intelligence will play more of a role in what they offer one day soon. The day may come when our smart assistants really can think for themselves and make educated judgments as Strong Artificial Intelligence evolves. That opens some tantalizing prospects for future smartphone technology.
Most of What the Average Person Knows About AI Comes From the Movies
For the average person, most of what we think we know about artificial intelligence has probably come from movies, TV, games, and books. Unfortunately, we're not on the cusp of having a robot friend, or a machine's disembodied voice with intellectual capability equal to a human's controlling our homes and making occasional snarky comments, as Iron Man has in his.
The reality is that current artificial intelligence is data-driven; it is a tool with many parts that may someday think and learn, rather than a characterized consciousness. But that doesn't mean the reality isn't just as interesting or exciting.
Microsoft Phases Out the Cortana App for Mobile (iOS and Android)
Microsoft has begun phasing out support for the Cortana app for mobile (iOS and Android), as described below. This says a lot about the competition among artificial intelligence assistants, and the gaps keep widening.
“As we make this shift toward a transformational AI-powered assistant experience in Microsoft 365, we need to adjust our focus areas of innovation and development to give our customers assistance where they need it most. As a result, we are making changes to some U.S. consumer-centric features and functionalities with lower usage.
The first change is to end support for all third-party Cortana skills on September 7. Then, in early 2021, we’ll stop supporting the Cortana app for mobile (iOS and Android) because you can now manage your calendar and email, join meetings, and do so much more via our new productivity-focused experiences — like the Cortana Windows 10 experience, Cortana integration in Outlook mobile, and soon Cortana voice assistance in the Teams mobile app.” – Microsoft Support, Upcoming changes to Cortana
So What Is Smartphone Artificial Intelligence?
Now that we’ve clarified exactly what artificial intelligence is not, let’s take a more in-depth look at what it is in the smartphone world and explore its history a little. After all, as technology evolves and more devices pair and work in conjunction with one another, knowing what artificial intelligence truly is and what it means could be helpful and interesting.
But before we do, let’s answer the question: what is smartphone artificial intelligence? Smartphone AI is essentially any hardware (such as a specialized processor chipset) and software that allows a device, in this case a smartphone, to monitor, remember, and learn its user’s habits to provide a better, more tailored overall experience.
On a basic level, this is happening right now. Your phone monitors your likes and dislikes, and by doing so, it can help you find things you’re interested in and steer you away from things you’re not. Let’s use our Google app as a primary example. As we keep our cell phones logged into our Google account, the Google artificial intelligence software on our phones, the app, and Google itself constantly tailor the service they provide us to make sure we continue to use that app and not a competitor’s.
Choices We Make Monitored By Artificial Intelligence
Browsers such as Safari and Mozilla Firefox offer search engines that provide essentially the same service, but many of us prefer Google and assume it is our choice. In many ways it is: we still have free will, and we can delete the app if we want to.
Many do genuinely prefer their app as a web browser, but the question is, why do they prefer it? It’s because in the time their account has existed on that app, artificial intelligence has helped them cultivate not only a useful search engine but a dashboard of the various things they enjoy. It’s also learned (with a little prompting from the user) what they dislike and do not want to be shown.
It knows the sports and athletes a person follows, as well as the movies and TV shows they enjoy. It also knows if you have an amateur interest in astronomy and will often direct you to New Scientist and other places where interesting astronomy stories may have broken.
It knows we have an interest in current affairs and politics but has learned only to show us these stories on our cell phone and not our iPad. Even though both devices are logged into the same Google account, it appears to have figured out that we like to relax when we use a different device and don’t want to read about politics, whereas on our phone it knows we like to be updated regularly about what’s going on. This is only a basic taste of what current smartphones can do when they try to learn our habits. Imagine what they could do with more power and possibly independent thought.
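The per-device tailoring described above can be sketched as a simple counter: record which topics you engage with on which device, and only surface a topic where you actually read it. Real personalization systems are vastly more sophisticated; every name and threshold below is our own illustrative assumption:

```python
from collections import defaultdict

# Toy sketch of per-device interest tracking, as described above.
# Real smartphone personalization is far more complex; every name
# and threshold here is an invented assumption for illustration.

class InterestProfile:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        # counts[device][topic] = times the user engaged with that topic there
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_engagement(self, device: str, topic: str) -> None:
        self.counts[device][topic] += 1

    def should_show(self, device: str, topic: str) -> bool:
        # Only surface a topic on a device where the user engages with it.
        return self.counts[device][topic] >= self.threshold

profile = InterestProfile()
for _ in range(5):
    profile.record_engagement("phone", "politics")   # reads politics on phone
profile.record_engagement("tablet", "politics")      # rarely on the tablet

print(profile.should_show("phone", "politics"))   # True
print(profile.should_show("tablet", "politics"))  # False
```

Even this toy version reproduces the behavior in the example: politics surfaces on the phone but not on the iPad, purely from usage counts.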
Artificial Intelligence (AI) Glossary of Simplified Terms
- Algorithm – “Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task.”
- Applied Artificial Intelligence – Also known as advanced information processing, produces commercially viable “smart” systems.
- Artificial Narrow Intelligence (ANI), Weak Artificial Intelligence, or Narrow Artificial Intelligence – Computer programs can generate intelligent results in well-defined narrow areas focused on specific tasks.
- Artificial General Intelligence (AGI), or Strong AI – This type of Artificial Intelligence is defined as capable of understanding human motives and reasoning, which has not been achieved.
- Artificial Superintelligence (ASI) or Self-aware Artificial Intelligence – A hypothetical machine whose intelligence surpasses human general intelligence, with effectively unlimited intellectual capability and growth potential.
- Cognitive Simulation – Computer programs are used to test hypothetical mental theories about how the human mind works.
- Deep Learning – “also known as deep structured learning, is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised, or unsupervised.”
- Expert Systems – Expert Systems are a computer program knowledge base to solve complex problems and use an inference engine. The Inference Engine is a component of the system that applies logical rules to the knowledge base to deduce new information.
- IoT – Internet of Things – Connecting the physical world to a computer or mobile device via the Internet. Artificial Intelligence applications are intricately tied to the IoT.
- Reactive AI – Reactive Machines or Reactive Systems are the oldest form of artificial intelligence. These systems have a singular purpose, no memory, and no understanding of the world outside their function.
- Limited Memory Machines – Limited memory machines are capable of learning from historical data to make decisions.
- Machine Learning (ML) – A subset of artificial intelligence, Machine learning is the study of computer algorithms that improve through experience.
- Natural Language Processing (NLP) – AI dedicated to teaching computers to understand written language.
- Neural Networks (NN) – A computer system modeled on the human brain and nervous system, applied to tasks such as visual perception, language processing, and financial analysis.
- Programming Language – A formal language comprising a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms.
- Self-aware AI – Hypothetical AI that, as the name suggests, has evolved to be so akin to the human brain that it has developed self-awareness.
- Theory of Mind (ToM) – A term from psychology, applied to Artificial Intelligence as an assessment of a system’s capacity for empathy and for understanding others.
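To ground the Machine Learning entry above, here is a minimal sketch of a program "improving through experience": a one-parameter model that learns the rule y = 2x from example data by gradient descent. This is a deliberately tiny illustration of the idea, not how production machine learning systems are built:

```python
# Minimal illustration of "improving through experience": a one-parameter
# model learns the rule y = 2x from examples via gradient descent.
# Deliberately tiny; real machine learning systems are far more complex.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

weight = 0.0          # initial guess for the multiplier
learning_rate = 0.02

for _ in range(200):                          # each pass is more "experience"
    for x, y_true in examples:
        y_pred = weight * x                   # model's current prediction
        error = y_pred - y_true
        weight -= learning_rate * error * x   # nudge the weight toward lower error

print(round(weight, 2))   # 2.0: the model has learned the pattern
```

The program was never told the rule; it discovered the multiplier purely from the data, which is the essence of machine learning in miniature.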
Where Does The Story Of Artificial Intelligence Begin?
According to Adrienne Mayor, author of Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, humans have always been fascinated with the concept of artificial intelligence. Even before computers existed, the idea of artificial intelligence was something that had captured the public consciousness on many different occasions.
Although the first home computers arrived in 1977 as a kit, the average household obtained its first family personal computer during the early to mid-1980s. Computers had existed for decades, but this was the first time they were mass-produced and sold to people all over the world. Before this, computers were only seen in offices and movies. Gradually, more and more appeared.
As we discuss in our article, Your Cell Phone Was Born in the U.S. Military, the first handheld “Brick” cellular phone was introduced by Motorola’s Dr. Martin Cooper, who made the first call on April 3rd, 1973.
World’s First Smartphone?
“IBM debuted a prototype Smartphone device, code-named the “Angler”, on November 23, 1992, at the COMDEX computer and technology trade show in Las Vegas, Nevada. The Angler prototype later to be called the Simon combined a touchscreen mobile phone and PDA – Personal Digital Assistant into one device, allowing a user to make and receive telephone calls, facsimiles, emails, and cellular pages. Not only did the prototype have many PDA features including a calendar, address book, and notepad, but it also demonstrated other applications such as maps, stocks, and news. This was possibly the world’s first smartphone.” – Tech Evaluate, Your Cell Phone Was Born in The U.S. Military
Since then, it has been the norm for many in the developed world to own their own personal computer, and you would be hard-pressed to find someone who does not have at least one laptop as well as a tablet and smartphone in their household. And this is how it remains today.
Malevolent Artificial Intelligence – The Terminator
But even in the early 90s, before many of us had really had a chance to touch an actual computer, we still knew about artificial intelligence. Perhaps this is because of James Cameron’s 1984 film The Terminator, the film that not only made artificial intelligence something everyone started learning about, but also created a mythology in which artificial intelligence was something to fear.
Skynet was the malevolent artificial intelligence in that movie, intent on the destruction of humanity to preserve itself. The series has become a big-budget blockbuster franchise since then, but Skynet arguably remains the most famous example of artificial intelligence in fiction, despite fictional AIs now being nearly everywhere in sci-fi franchises. We can thank Skynet (and the movie that created it) for helping us as a society first learn about the concept of artificial intelligence. But we should also be very wary of Skynet’s interpretation of what artificial intelligence is and what it one day could be, a vision that has become ingrained in our culture.
Here’s a quick experiment: next time you’re out for dinner or drinks with your friends, mention the concept of artificial intelligence, and see how long it takes for someone to bring up Skynet, The Matrix, or killer robots of some description! Believe it or not, our interest in artificial life becoming sentient goes back even further. Stories of automatons in the ancient world are well known to history scholars and have been documented in ancient Greece, Egypt, and India.
Computing, Science And The Rise Of Artificial Intelligence As We Know It
Alan Turing published a paper in 1950 titled “Computing Machinery and Intelligence,” in which he describes “The Imitation Game,” the idea of constructing a thinking machine, and digital computers. Today, the Turing Test is what many use to judge whether a machine’s behavior is indistinguishable from a human’s, the question behind talk of achieving the “Singularity” in computing consciousness.
Most of the time today, when you hear talk of Artificial Intelligence progress, you will undoubtedly hear the question asked – did it pass the Turing Test?
The term artificial intelligence was first coined by John McCarthy in his 1955 proposal for the 1956 Dartmouth Conference, the first academic conference on the subject.
In the 1940s and 1950s, scientists, engineers, and anthropologists started to research the idea of Artificial Intelligence and wondered whether something could ever be created that artificially replicated the human brain. Could we engineer something that thought and behaved as we do? The argument was that since the human brain works by using neurons and synapses, perhaps we could replicate this by creating a neural network that emulated our own. The research went on for a long time and would help give birth to the modern computing we still use today, becoming the concept of traditional artificial intelligence.
1956 Dartmouth Conference
It was at the 1956 Dartmouth Conference that things really progressed, and it was here that the term artificial intelligence was formally adopted as the name of the field of study. Research into artificial intelligence was planned out and continued for the next twenty years, with various advances and setbacks.
The theory of Artificial Intelligence was well understood and mapped out, but one of the problems in developing sophisticated artificial intelligence was the sheer lack of computing power available. By 1976, robotics specialist Hans Moravec concluded in his paper “Intelligent Machines” that computers themselves needed to progress before artificial intelligence truly could, and funding, as well as academic interest in the idea, started to wane. This gave rise to an era known to those involved as the AI Winter.
“A major reason for the difficulty has become obvious to me in the course of my work on computer vision. It is simply that the machines with which we are working are still a hundred thousand to a million times too slow to match the performance of human nervous systems in those functions for which humans are specially wired. This enormous discrepancy is distorting our work, creating problems where there are none, making others impossibly difficult, and generally causing the effort to be misdirected.” – Hans Moravec, Intelligent Machines
Later in 1995, Hans Moravec, in a Wired article “Superhumanism,” would predict that by “2040 robots will become as smart as we are. And then they’ll displace us as the dominant form of life on Earth.”
Artificial Intelligence in the 1980s
But by the 1980s, interest in artificial intelligence as a field of study gradually returned. A key driver of the renewed interest is believed to be a breakthrough called the “expert system.” Under the hood it was very complicated, but it basically allowed a computer program to answer a question using logic, drawing on a knowledge base to derive that answer, or at least as much of it as it could.
This system, alongside new advancements in computer processing power, enabled artificial intelligence to help people find answers to complex questions, providing the answer that existed within that computer’s data. It at least saved time; before such a breakthrough, we would need to consult books or papers to find such information.
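A toy version of the expert-system idea described above might look like this in Python: a small knowledge base of facts plus an inference engine that applies if-then rules until nothing new can be deduced (this matches the "inference engine" entry in the glossary). The facts and rules here are invented examples:

```python
# Toy expert system: a knowledge base of facts plus an inference engine
# that applies if-then rules until no new facts can be deduced.
# The facts and rules are invented for illustration.

facts = {"has_touchscreen", "makes_calls"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"has_touchscreen", "makes_calls"}, "is_smartphone"),
    ({"is_smartphone"}, "can_run_apps"),
]

changed = True
while changed:                      # forward chaining: repeat until stable
    changed = False
    for required, conclusion in rules:
        if required <= facts and conclusion not in facts:
            facts.add(conclusion)   # deduce a new fact from the rule
            changed = True

print("can_run_apps" in facts)      # True: deduced in two chained steps
```

Note how the second conclusion depends on the first: the engine deduces "is_smartphone" and then uses that new fact to deduce "can_run_apps", which is exactly the knowledge-base-plus-logic behavior described above.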
Imagine this system in place but with access to the internet. To be fair, it sounds exactly like what Google does today: we input something we need into Google, and it fetches that data based on a certain algorithm. It’s algorithms like this that enable all artificial intelligence.
Your smartphone applies the same principle to your Google account to work out what you want to see on your home page and what you don’t. There were lots of other advancements in the field of artificial intelligence, and a lot more setbacks too. There was even a second AI Winter, from 1987 to 1994.
Three General Principles of Artificial Intelligence
With the rise of the internet and the Home Computer, artificial intelligence was here to stay. There were several concepts and areas of study that helped it develop into what it is today. They are categorized into three general principles when applied to modern artificial intelligence.
- Deep Learning – Deep Learning is the first of these, which in a nutshell means creating algorithms through which a machine uses patterns to learn and remember, making it more likely to reach similar conclusions in the future and to reject what doesn’t fit those patterns.
- Big Data – Big Data is the second concept and is an extension of the Expert System in many ways, but this time the data is too vast to be stored and managed in a conventional way. For example, the internet is a space where infinite knowledge theoretically can be stored; one computer or program couldn’t hope to capture it all. So, the big data principle allows artificial intelligence to analyze such a vast quantity of data and make sense of what it needs. Google anyone?
- General Intelligence – Finally, there is General Intelligence, a term used to describe the overall progress of artificial intelligence and the goal of getting it to a stage when it can replicate human comprehension and, dare we say, attain actual consciousness.
Combining all three of these methodologies allows computer scientists to continue to refine artificial intelligence until it’s something that can help us more in our day-to-day lives. Of course, what’s described above is a very basic outline of these ideas. After all, this article is mainly about how smartphones utilize artificial intelligence. Still, it gives us an insight into how it works and how it’s improving all the time to the point where artificial intelligence is about to break through and become much more of a staple of smartphone technology.
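The Big Data principle above, analyzing more data than could ever be held at once, is essentially streaming: process one item at a time and keep only a small running summary. Here is a toy sketch; the "stream" is an invented stand-in for a real data firehose:

```python
# Toy illustration of the Big Data principle described above: the data
# stream is too large to hold at once, so we process it piece by piece
# and keep only a small running summary. The "stream" is an invented
# stand-in for a real firehose of search queries.

def search_query_stream():
    # In reality this would be an endless feed; here it is a tiny sample.
    queries = ["weather", "news", "weather", "sports", "weather", "news"]
    for q in queries:
        yield q

counts = {}
for query in search_query_stream():      # only one item in memory at a time
    counts[query] = counts.get(query, 0) + 1

top = max(counts, key=counts.get)        # the summary answers our question
print(top)    # "weather": the most common query in the stream
```

The summary (a handful of counters) stays tiny no matter how long the stream runs, which is why this pattern scales to data far larger than any one machine could store.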
How To Train Your Robot Through Smartphone Artificial Intelligence
Now that we know how artificial intelligence has developed, how it works, and how it doesn’t work, what do we have to look forward to? Well, it goes far beyond voice-controlled assistants and has much more to do with how the phone thinks, learns, and caters to its user.
As we said in the beginning, 2017 was the year Artificial Intelligence (AI) was first used in our smartphones. Not just in the cloud but in the smartphones themselves.
Think about it: what better way to “train your robot” than carrying it with you 24/7? That is your smartphone. Google and Apple can observe where you are every second of the day through your location, exercise habits, thoughts, feelings, and interactions with others. They want to know what you look at, what you study, and what you listen to. They want to know everything about you, to serve their business and profit and to train their robots.
It looks like the idea behind artificial intelligence on smartphones is for our devices to get to know us in new and intriguing ways, allowing them to build a deep built-in profile of who we are in an effort to make our lives richer and more convenient. Of course, there are inevitable concerns about data security and privacy. It will be up to providers and manufacturers to assure consumers that this technology is safe to use and doesn’t compromise our basic rights.
But if this is the case, and they are able to maintain a balance between privacy and data-led convenience, we may be about to see an evolution of the smartphone into something more. Even referring to the devices as smartphones may soon become outdated if this technology really does offer what we expect it may. We’re not so sure exactly what the devices will be called, but if we can pick our own names, I’m calling my next phone Jarvis.
“The strong AI mainly deals with developing systems that can reason or can think or act like a human. Then the other kind is the weak AI, which is more focused on specific tasks. And this also includes virtual assistants. We’re still very far away from strong AI.” – Why Apple’s Siri isn’t as smart as Amazon Alexa and Google Assistant