
Thursday, February 12, 2015

Applied artificial intelligence: Mythological to neuroanatomical

Neuromorphic chip at the University of Heidelberg, Germany. Credit: University of Heidelberg

This is the first in a three-part series exploring the depths of artificial intelligence and machine learning in 2015.


While artificial intelligence (AI) has roots as far back as Greek mythology, and Aristotle is credited with inventing the first deductive reasoning system, it wasn’t until the post-WWII era of computing that we humans began to execute machine intelligence on early supercomputers. The science progressed nicely until the onset of the AI winter in the 1960s, which marked the beginning of the field’s severe boom-and-bust cycles. A great deal of R&D needed to evolve and mature over the following five decades before applied AI could be widely adopted -- here is a brief history and analysis of the rapidly evolving field of artificial intelligence and machine learning.
 
Hardware innovation

One of the primary obstacles to applied AI has been scaling neural networks, particularly those built on rich descriptive data and complex autonomous algorithms. Even in today’s relatively low-cost computing environment, scale is problematic, resulting in graduate students at Stanford’s AI lab complaining to their lab director, “I don’t have 1,000 computers lying around, can I even research this?” In a classic case of accelerated innovation, Stanford and other labs then switched to GPUs: “For about US$100,000 in hardware, we can build an 11-billion-connection network,” reports Andrew Ng, who recently confirmed that Baidu now “often trains neural networks with tens of billions of connections.”
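To make that scale concrete, here is a rough back-of-the-envelope sketch in Python, under the simplifying assumption of a stack of fully connected layers; the layer widths are illustrative and are not the actual Stanford or Baidu topologies.

```python
# Rough estimate of connection (weight) counts in a fully connected network.
# Layer widths below are illustrative only, not real Stanford/Baidu topologies.

def count_connections(layer_sizes):
    """Number of weights in a stack of fully connected layers."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A deep, wide stack reaches billions of connections very quickly.
layers = [40000, 60000, 60000, 60000, 40000]
print("{:,} connections".format(count_connections(layers)))  # ~12 billion
```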

Although ever-larger scale at greater speed improves deep learning efficiency, CPUs and GPUs do not function like a mammalian brain, which is why researchers have for many years investigated non–von Neumann architectures that more closely mimic biological intelligence. One high-profile investment is DARPA’s current SyNAPSE program (Systems of Neuromorphic Adaptive Plastic Scalable Electronics).
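As a toy illustration of the event-driven, spiking style of computation these neuromorphic designs target, in contrast to the clocked instruction stream of a von Neumann machine, the following sketch simulates a single leaky integrate-and-fire neuron; the constants are arbitrary teaching values, not parameters of any SyNAPSE or Heidelberg chip.

```python
import random

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates incoming current, and emits a spike on crossing a
# threshold. All constants are illustrative, not tied to any real chip.
V_REST, V_THRESHOLD, V_RESET = 0.0, 1.0, 0.0
LEAK, DT = 0.1, 1.0  # leak rate per step, time step

v = V_REST
spikes = []
for t in range(100):
    input_current = random.uniform(0.0, 0.25)       # stand-in for synaptic input
    v += DT * (-LEAK * (v - V_REST) + input_current)
    if v >= V_THRESHOLD:                             # threshold crossing -> spike
        spikes.append(t)
        v = V_RESET                                  # reset after firing
print("spike times:", spikes)
```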

In addition to neuromorphic chips, researchers are increasingly focused on the use of organic materials, part of a broader trend toward the convergence of hardware, software, and wetware described by IARPA as cortical computing, aka neuroanatomical computing. The ability to manipulate mass at the atomic level in computational physics is now on the horizon, raising new governance issues as well as unlimited possibilities. The research goals of the Department of Energy in the fiscal 2016 budget proposal provide a good example of the future direction of R&D:
The Basic Energy Sciences (BES) program supports fundamental research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels …. Key to exploiting such discoveries is the ability to create new materials using sophisticated synthesis and processing techniques, precisely define the atomic arrangements in matter, and control physical and chemical transformations.
Software and data improvements

Similar to hardware, software is an enabler that determines the scalability, affordability, and therefore economic feasibility of AI systems. Enterprise software, including database systems, also suffered a period of low innovation and business creation until very recently. In early 2015, several companies in the billion-dollar startup club are providing infrastructure support for AI, offering overlapping services such as predictive analytics, and/or beginning to employ narrow AI and machine learning (ML) internally. Many of us are now concerned about the lack of experience and the excessive hype that often accompany rapidly increased investment.
 
Database systems and applications

Database systems, storage, and retrieval are of course critically important to AI. A few short years ago only one leading vendor supported semantic standards; a second market leader followed two years ago. The emergence of multiple choices in the market is producing the first significant new competition in database systems in over a decade, with several products increasingly competing for critical systems at comparable performance and significantly lower cost.
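For readers unfamiliar with what “semantic standards” look like in practice, the minimal sketch below stores a few RDF triples and runs a SPARQL query using the open-source Python library rdflib; the URIs and facts are invented for illustration and are not tied to any vendor’s database.

```python
from rdflib import Graph

# Tiny example of the W3C semantic standards (RDF + SPARQL) discussed above.
# The data and URIs are invented for illustration.
ttl = """
@prefix ex: <http://example.org/> .
ex:aspirin   ex:targets ex:COX1 .
ex:aspirin   ex:targets ex:COX2 .
ex:ibuprofen ex:targets ex:COX2 .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

query = """
SELECT ?drug WHERE { ?drug <http://example.org/targets> <http://example.org/COX2> . }
"""
for row in g.query(query):
    print(row.drug)  # prints the URIs for aspirin and ibuprofen
```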
Data standards and interoperability

Poor interoperability vastly increases costs and systemic risk, dramatically slows adaptability, and makes effective governance all but impossible. While early adopters of semantic standards include intelligence agencies and scientific computing, even banks are embracing data standards in 2015, in part due to regulatory agencies collaborating internationally. High-quality data built on deeply descriptive standards, combined with advanced analytics and ML, can provide much-improved risk management, strong governance, efficient operations, accelerated R&D, and improved financial performance. In private discussions across industries, it is clear that a consensus has finally formed that standards are a necessity in the network era.
 
Automation, agility, and adaptive computing

An effective method of learning an organization’s agility quotient is post-mortem analysis after a catastrophic event, such as a patent cliff in big pharma or a failure to respond to digital disruption. Unfortunately for executives, by the time clarity is gained from hindsight, the wisdom is only useful if those who earned it are placed in positions of authority again. Similarly, failed governance is quite evident in the wake of industrial accidents, missed opportunities to prevent terrorist attacks, and whale trades, among many others.

Given that the majority of economic productivity is planned and executed over enterprise networks, it is fortunate that automation is beginning to replace manual system integration. Functional silos are causal factors in crises, as well as obstacles to solving many of our greatest challenges. Too often, the lack of interoperability has been leveraged for the benefit of the IT ecosystem rather than the customer’s mission. In drug development, for example, until recently scientists were reporting lag times of up to 12 months for important queries to be run. By automating data discovery, scientists can run their own queries, AI applications can recommend previously unknown queries, and deep learning can continuously seek solutions to narrowly defined problems tailored to the needs of each entity.
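As a hypothetical sketch of what automated data discovery might look like, the following example matches a scientist’s plain-language question against dataset metadata so a query can be run immediately rather than waiting on an integration project; the catalog, tags, and ranking are invented for illustration.

```python
# Hypothetical sketch of automated data discovery: match a scientist's question
# against dataset metadata, then point them at the best candidate dataset.
# Catalog contents and field names are invented for illustration.
CATALOG = [
    {"name": "assay_results",   "tags": {"compound", "target", "ic50", "assay"}},
    {"name": "clinical_events", "tags": {"patient", "adverse", "event", "trial"}},
    {"name": "chem_registry",   "tags": {"compound", "structure", "smiles"}},
]

def discover(question):
    """Rank datasets by overlap between question keywords and dataset tags."""
    words = set(question.lower().split())
    ranked = sorted(CATALOG, key=lambda d: len(words & d["tags"]), reverse=True)
    return ranked[0]["name"]

# The scientist asks in plain terms; no months-long wait for a custom integration.
print(discover("which compound hit the target in the ic50 assay"))  # assay_results
```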

In the next article in this series we’ll take a look at recent improvements in algorithms and what types of AI applications are now possible.
 
This article is published as part of the IDG Contributor Network.
