Artificial Intelligence (AI) is currently the most trending topic in IT and data science. A significant number of data scientists focus on the development of AI systems based on technologies such as Machine Learning and Natural Language Processing. At the same time, there is huge demand for AI systems and applications in almost every sector of the economy, including finance, manufacturing, defense, security and healthcare. In this context, however, there is much confusion between true AI systems and systems that merely claim AI support as a marketing buzzword. In simple terms, AI is defined as the capability of machines to imitate intelligent human behavior and to solve problems much in the same way humans do. There are, for example, systems that think and reason in ways similar to humans in order to solve problems like loan application processing, disease diagnosis, credit risk assessment and more.

Beyond simple definitions of AI, it is worth noting one of the key factors that differentiates AI systems from conventional IT systems: learning. The majority of IT systems apply fixed, given rules to input data in order to produce some useful output for the application at hand. By contrast, AI systems are not deterministic: they are able to extract and learn rules when fed with appropriate data, i.e. an AI system learns how to behave based on data, instead of relying on a pre-existing set of rules.
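As a minimal sketch of this difference, consider the toy loan-screening example below. It contrasts a conventional system, which applies a hand-written rule, with an AI-style system that learns its decision threshold from past data. All function names, thresholds and data points here are fabricated purely for illustration:

```python
# Conventional system: a fixed, pre-existing rule written by a developer.
def approve_fixed(income, debt):
    """Approve if debt is under a hard-coded 40% of income (made-up rule)."""
    return debt < 0.4 * income

# AI-style system: learn the decision threshold from past labelled outcomes.
def learn_threshold(history):
    """Pick the debt-to-income cutoff that best separates past outcomes."""
    candidates = sorted(debt / income for income, debt, _ in history)

    def accuracy(cutoff):
        # Count how many past cases this cutoff would have classified correctly.
        return sum((debt / income < cutoff) == repaid
                   for income, debt, repaid in history)

    return max(candidates, key=accuracy)

# Fabricated training data: (income, debt, did the applicant repay?)
history = [
    (50_000, 10_000, True),
    (60_000, 35_000, False),
    (40_000, 12_000, True),
    (80_000, 50_000, False),
]

cutoff = learn_threshold(history)

def approve_learned(income, debt):
    """Apply the rule extracted from data rather than a pre-existing one."""
    return debt / income < cutoff
```

Fed with different historical data, `approve_learned` would behave differently, while `approve_fixed` never changes; this is the essence of the learning-based distinction, albeit in a deliberately simplistic form.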
Overview of Artificial Intelligence Systems and Technologies
There is a large number of different AI systems that exhibit the above-listed properties. Prominent examples include:
- Machine Learning (ML) Systems: These systems encompass algorithms that can recognize patterns and extract knowledge based on their training on large datasets. In principle, ML systems learn from past data, much in the same way humans learn from past observations.
- (Deep) Neural Networks: These systems comprise special types of machine learning algorithms that mimic the operation of the human brain and that are sometimes able to identify very complex patterns, especially when trained on very large datasets. Although Deep Neural Networks are a subset of ML that is commonly called Deep Learning (DL), data scientists and enterprises tend to refer to them separately in order to differentiate them from other forms of ML.
- Natural Language Processing (NLP): These systems exhibit intelligence based on their ability to understand human languages and extract insights from them. Likewise, NLP systems can in several cases interact with humans and/or other systems using human language.
- Computer Vision Systems (CVS): CVS systems are capable of extracting useful patterns from one or more images, including sequences of images that are part of animations or video. Based on CVS systems, it is possible to develop agents that can analyze scenes in images as a means of identifying the contents of the image, their context and their dynamic behavior.
- Cognitive Search Systems: Cognitive search systems enable the assessment of complex situations and boost human decision making. As part of their operation, they typically collect, analyze and contextualize different types of data using rule-based reasoning or leveraging machine learning algorithms that are trained on available datasets.
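The idea that ML systems learn from past data, much as humans learn from past observations, can be sketched with a toy 1-nearest-neighbour classifier: a new case is labelled according to the most similar previously observed case. The feature vectors and risk labels below are invented for illustration:

```python
import math

def predict(training_data, point):
    """Label a new point with the label of its closest past observation."""
    nearest = min(training_data,
                  key=lambda example: math.dist(example[0], point))
    return nearest[1]

# Past observations: (feature vector, label) — fabricated data.
training_data = [
    ((1.0, 1.0), "low risk"),
    ((1.2, 0.8), "low risk"),
    ((4.0, 4.5), "high risk"),
    ((4.2, 3.9), "high risk"),
]

print(predict(training_data, (1.1, 0.9)))  # → low risk
print(predict(training_data, (4.1, 4.2)))  # → high risk
```

Real ML systems use far more sophisticated algorithms and much larger datasets, but the principle is the same: the system's behaviour is determined by the examples it has seen, not by rules programmed in advance.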
These systems are often embedded into hardware devices and cyber-physical systems that enable AI operations. For example, ML and DL algorithms are usually integrated within robots and drones in order to enable their autonomous operation. This is why smart objects that exhibit autonomy are often classified as AI systems as well.
Why AI Now?
AI concepts and applications are not, however, new. Technologies like machine learning, data mining and cognitive search, along with AI-based systems like expert systems and fuzzy logic systems, have been around for decades. During the last five decades there has been a steady evolution of AI, evident in major milestones such as IBM's Deep Blue victory over world chess champion Garry Kasparov in 1997 and the rise of the iRobot Roomba automated vacuum cleaner. However, it is only during the last decade that the performance of AI systems has improved to an extent that enables their practical deployment and adoption. This is mainly due to the following factors:
- Abundance of Computing Power: State-of-the-art computers are orders of magnitude faster than the computers used ten or twenty years ago. Hence, it is currently possible to process very large datasets in a short time, as needed for the training and fast deployment of AI algorithms.
- Rapidly Declining Storage Costs: Nowadays it’s cheaper and easier to store very large amounts of data, which is a key prerequisite for successfully training and deploying non-trivial AI algorithms such as deep learning.
- Surge in Data Availability: In recent years, there has been a rapid proliferation of available datasets, propelled by the increase in the number of internet-connected devices, as well as by the rise of social media and user-generated content.
- Increased AI-related Investment: In recent years, much more money and many more resources have been invested in AI compared to the past. For example, during the last few years, Venture Capital and other innovation-boosting investments in Machine Learning have increased at a rate of $5-10 billion per year.
These drivers will continue to boost the evolution of AI. For example, the number of internet-connected devices is still proliferating, while computing capacity is still improving at an exponential pace, as per the famous Moore's law. Likewise, new computational concepts like quantum computing (i.e. a computational paradigm that exploits the capabilities of quantum physics) are also expected to increase their capacity at unprecedented rates that exceed those of conventional computers.
Challenges and Future Outlook
AI is expected to affect nearly all areas of our socio-economic life. AI programs are already penetrating all economic sectors, offering automation and eliminating error-prone, human-mediated processes. There are predictions that AI will soon replace the vast majority of procedural and laborious tasks, resulting in millions of jobs being lost or replaced by computers. This creates significant social concerns about people made redundant and calls for new policies regarding work, education and social welfare. Historically, we know that innovations eliminate jobs yet create new ones, ending up with a positive balance. Nevertheless, it is also argued that the AI revolution might be different, as it replaces not only laborious tasks but mental tasks as well.
AI’s impact on the job market is only one of the challenges of AI deployments. There are also challenges related to privacy and data protection, as AI systems rely on the collection and processing of very large amounts of data. Likewise, there are ethical challenges, as it is debated whether autonomous AI systems will be able to behave ethically at all times. Another concern relates to security, as the hacking of AI systems like autonomous vehicles can in several cases have life-threatening implications. All these challenges need to be addressed through proper technical measures, alongside appropriate organizational schemes and socio-economic policies.
AI’s future is exciting, as new scientific breakthroughs will expand the scope, functionality and flexibility of AI systems. For example, most AI systems today are domain-specific, i.e. focused on applications for which adequate amounts of data and context are available. In the future, it is likely that generalized AI systems that can learn general concepts and repurpose themselves across different applications will emerge. The rise of generalized AI will provide a whole new range of opportunities, yet it will also intensify the above-listed challenges (e.g., ethics concerns). Therefore, companies had better prepare for the adoption of AI on the basis of a coherent strategy, while anticipating the next generation of AI applications.