In the past year, we have seen exciting developments in what Artificial Intelligence has to offer. In 2013, Artificial Intelligence started to dominate simple Atari games. Fast forward six years to today, and it is dominating real-time strategy games like StarCraft. AlphaStar, created by Google’s DeepMind, was able to compete at the professional level in January this year and went on to achieve Grandmaster level by October. That much improvement in less than a year is amazing!
OpenAI, an organization devoted to developing Artificial General Intelligence (AGI), has managed to train computer agents to play hide-and-seek, and if you check out the YouTube video, you will be amazed at the strategies they come up with, for instance using blocks to barricade themselves.
Other areas have also come to the attention of the Artificial Intelligence community. I will discuss them below to help readers understand what they are.
Artificial General Intelligence soon?
One question I get asked a lot during any discussion with the community or general public is, “Are we going to have human-like intelligence soon?”
I have been doing research into AGI in recent years and have been following media reports on Artificial Intelligence for quite a while. The vision painted seems to be that we will be able to develop human-like intelligence soon and we humans will be out of a job in the very near future. My short answer is, I do not think we will be developing AGI that soon. My guesstimate is that I will (sadly and unfortunately) not see it during my lifetime, i.e. within the next 40 years. There are still many areas that need to be developed before we get to AGI. For instance, knowledge representation and abstraction are still being worked out and have not been integrated into the current “intelligence” that we have built.
The current level of Artificial Intelligence that scientists have been able to achieve is called “Artificial Narrow Intelligence”, or ANI. ANI is very good at performing a single, well-defined task, which means that more and more such tasks can and will be automated.
Let me use an example to illustrate the difference between ANI and AGI. An agent with ANI can make coffee only in Kitchen A and in no other kitchen. An agent with AGI, however, can not only make coffee very well in Kitchen A but can also make coffee in any other kitchen, generalizing from what it learned in Kitchen A.
The interested reader can read my previous blog post on the different definitions here.
Jobs-wise, readers do not have to worry: our jobs usually consist of many tasks, and it is usually the boring ones that stand a chance of being automated. This means we can look forward to higher value-added (and usually more interesting) tasks and projects to work on. Better productivity hopefully translates to better pay!
AI Ethics & Explainability
With the proliferation of AI into our business processes, and with more decision-making power handed over to AI, we are starting to see AI have greater influence and impact on consumers and clients. With that in mind, businesses must start thinking about how their use of AI can affect their customers, both positively and adversely. Customers, especially the more disgruntled ones, are going to ask more questions about how the decisions affecting them were made. As such, business organizations need to understand how decisions are made inside the machine learning models they deploy in order to provide that explanation. This is very challenging given that biases can creep into many stages of the model-building process: data collection, model training, validation and so on.
Moreover, most of the machine learning models in use, such as deep neural networks and support vector machines, are not built to reveal their inner workings with full transparency. For instance, many instructors and lecturers tend to call neural networks black boxes to avoid the tedium of explaining the mathematics behind them. Even with an understanding of the mathematics, neural networks do not readily reveal how they arrive at their decisions.
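One common way to peek into an otherwise opaque model is a model-agnostic technique such as permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. Here is a minimal sketch on a synthetic dataset (the data and the stand-in `model_predict` function are purely hypothetical, chosen so that only the first feature matters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: only the first feature drives the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    # Stand-in for any trained black-box model; here a trivial rule
    # that happens to match how the labels were generated.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    """Average drop in accuracy when each feature is shuffled."""
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model_predict, X, y)
```

On this toy setup, shuffling the first feature destroys the model’s accuracy while shuffling the others does nothing, so the importance scores single out the feature the “black box” actually relies on. This does not explain individual decisions, but it is a practical first step towards the kind of accountability discussed above.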
As the general public becomes more educated about AI, questions about how decisions are made will come in fast and furious. To avoid a PR disaster or damage to brand reputation, businesses should start examining the use of data and machine learning in their processes, understand its impact and address any foreseeable biases in their data and machine learning pipelines.
Data Privacy & Security
Many businesses have jumped onto the data bandwagon. With tons of data being collected to develop AI models, maintaining privacy and security will become more complicated and, at the same time, a challenge that cannot be avoided. The good news is that technologies are being developed to protect individual privacy and security, such as differential privacy and federated learning, which restrict the raw data exchanged between parties while still allowing reliable machine learning models to be trained. Companies can start looking at these technologies to overcome their data privacy and security challenges. These technologies usually draw on research in both machine learning and cryptography, so talent with a good blend of multidisciplinary skills will help businesses better leverage their power.
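To give a flavour of what differential privacy looks like in practice, here is a minimal sketch of the classic Laplace mechanism applied to releasing an average: clip each record to a known range, then add noise calibrated to how much one individual could change the result. The salary figures and parameter choices are purely illustrative assumptions, not a production recipe:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_mean(values, lower, upper, epsilon):
    """Epsilon-differentially-private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so changing one record can
    shift the sum by at most (upper - lower). The mean's sensitivity is
    therefore (upper - lower) / n, and we add Laplace noise with scale
    sensitivity / epsilon.
    """
    clipped = np.clip(values, lower, upper)
    n = len(clipped)
    sensitivity = (upper - lower) / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical example: release the average of 10,000 salaries
# with a privacy budget of epsilon = 0.5.
salaries = rng.normal(loc=60_000, scale=10_000, size=10_000)
private_avg = dp_mean(salaries, lower=0, upper=200_000, epsilon=0.5)
```

The appeal of this design is the formal guarantee: whether or not any single individual is in the dataset, the distribution of the released number barely changes, yet with enough records the noisy average stays close to the true one.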
On the research front, I believe progress in Artificial Intelligence is starting to plateau and breakthroughs might become fewer going forward; even the Head of AI at Facebook has said as much. Having said that, I believe many businesses can still take advantage of Artificial Intelligence, given the symbiotic relationship between technology improvement and the data collected. To take advantage of both, the quantity and quality of talent will matter. Given that many countries believe they need to dominate (or at least be very good at) Artificial Intelligence to attract investment and move their economies forward, demand for AI talent will be very high, especially for very experienced data/AI scientists.
Businesses, both small and large, that are serious about building Artificial Intelligence capabilities should start planning out their talent roadmap: how to identify, attract and retain the best talent. It will not be easy, given that they may have to compete with technology firms and financial institutions for that talent, but I believe a business can still attract its fair share with a good roadmap, outreach, project mix and management. All in all, moving into 2020 and beyond, the war for AI talent will get more intense.
For the next few years, with the increasing adoption of Artificial Intelligence (i.e. ANI), there is going to be a stronger emphasis on understanding how decisions are made by Artificial Intelligence (i.e. ethics, transparency and explainability), stronger measures needed in privacy and security, and a VERY intense war for the best talent.
These are my observations so far, and I will be more than happy to discuss these areas further.