Only a short time ago, the idea that machines might faithfully replicate the human thought process was, for most, still the stuff of science fiction. Imagining clusters of silicon pumping out synthetic thoughts was once confined to the mind, but no more. Now, innovations in programming, mathematics, and computer engineering are being applied to narrow the gap between the self-aware, arguably conscious artificial intelligence that fiction has proposed and the AI technology we can actually build.
Like all technology, AI began as an idea, spurred on by computers’ increasingly powerful processing capability. Its blueprint was sketched through breakthroughs in computer programming techniques that allowed for basic mimicry of human logical patterns. Decades of technical tweaks and increasingly sophisticated programming practices granted AI the ability not only to analyze but to learn, transforming intelligent tech from an interesting parlor trick into a dominant software paradigm with extensive potential for real-world application in logistics, data mining, medical diagnosis, and, of course, personal computing.
2016 was a landmark year for AI. We saw billions invested in the AI industry, and more than 20 independent AI companies acquired by Apple, Intel, and other industry movers. And, in addition to Google, Microsoft, IBM, and Amazon joining forces to form the “Partnership on AI,” we witnessed machine learning integrated heavily into the tech giants’ digital services to better interpret browsing habits and personalize search results. A particularly impressive example of 2016’s online AI optimization was a boost to Google’s translation software, which can now instantly interpret the nuanced meanings behind combined phrases, not just the definitions of individual words.
Though 2016 may have been major, experts predict 2017 will be nothing short of revolutionary for artificial intelligence tech. The AI program AlphaGo’s 2016 victory over Lee Sedol, world champion of Go (a Chinese strategy game many times more complex than chess), was made possible through a machine learning method known as “deep reinforcement learning.” Deep reinforcement learning essentially allows computers to associate positive results with certain paths taken via trial and error; it removes the need for instructions or even examples, as machines can simply repeat scenarios using different approaches until a desired result is achieved. 2017 will likely see deep reinforcement learning applied to technologies such as industrial robotics and self-driving cars.
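The trial-and-error idea described above can be sketched with tabular Q-learning, a simpler, non-deep precursor of the deep reinforcement learning behind AlphaGo. The toy corridor, reward values, and learning parameters below are purely illustrative assumptions, not anything drawn from AlphaGo itself:

```python
import random

# Toy corridor: states 0..4, start at state 0, reward only at goal state 4.
# The agent repeats episodes, trying actions and reinforcing those that
# eventually led to reward -- the trial-and-error core of reinforcement
# learning. (All names and hyperparameters here are illustrative.)

N_STATES = 5
ACTIONS = [-1, +1]                 # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] value table

def step(state, action_idx):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + ACTIONS[action_idx]))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # reached the goal
    return nxt, 0.0, False

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # Q-update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should move right from every non-goal state.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # expected: [1, 1, 1, 1]
```

Deep reinforcement learning replaces the small Q table with a neural network, which is what lets the same principle scale from a five-state corridor to a game as vast as Go.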
Other likely AI developments in 2017 include improving AI programs’ capacity for language-based learning and creating remarkably detailed visuals using generative adversarial networks, in which one model fabricates new data while a second model judges it against real examples. China is also set to stake its claim to the AI landscape, as Chinese leadership has pledged an additional $15 billion to strengthening the country’s fledgling AI field.
Even in its infancy, AI tech has already changed the way we live. Its interpretive abilities have augmented how we experience the internet, operate machinery, and even diagnose and treat illness. I believe AI’s metamorphosis from fiction to fact will continue to alter and dictate our interactions with technology throughout 2017 and beyond.