The Coming AI Autumn

Jeffrey P. Bigham

@jeffbigham

4/8/2019


tl;dr: AI hype is deflating all around us, and what will be left is a rich harvest of human-centered technical work applying machine learning to important problems.

===

I like to poke fun at AI hype; here’s a tweet from a few days back:

A robot did not teach itself how to play Jenga. I didn’t read the article, and still haven’t, but some humans decided to teach a robot to play Jenga. Humans (with substantial effort) made a system that would enable the robot to learn from some sort of data. Most likely the data came from human trials, or maybe humans set up the right reward structure so the robot could learn by “playing itself.”
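To make the point concrete, here’s a tiny, hypothetical sketch of tabular Q-learning -- illustrative only, not the actual Jenga system. Notice how much of it humans supply before any “self-teaching” happens: the state encoding, the action set, the reward function, the tuning.

```python
# A minimal tabular Q-learning sketch (hypothetical; not the MIT Jenga work).
# The "self-taught" part is only the update rule at the bottom -- everything
# else here is designed by humans.
import random
from collections import defaultdict

ACTIONS = ["nudge_block", "pull_block", "place_block"]  # humans chose these
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2                   # humans tuned these

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def reward(state, action, next_state):
    # Humans wrote this: punish toppling the tower, mildly reward stability.
    return -10.0 if next_state == "toppled" else 1.0

def choose_action(state):
    if random.random() < EPSILON:                       # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

def q_update(state, action, next_state):
    r = reward(state, action, next_state)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```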

Similarly, cars will not drive themselves by 2020. Speech recognition has not reached human parity. “Alexa” is not conversational. Computer vision cannot answer arbitrary visual questions. We’re not at risk of self-aware killer robots.

Some see the repeated failed predictions and think another “AI Winter” could be on the horizon, but that’s not going to happen. The AI Winter happened because there was tremendous hype with not all that much underlying it (yet). These days, important work is happening beneath the hype. People think Alexa is about conversational agents, but it’s really about better mics, an extensively engineered rule-based system, and, yes, somewhat better speech recognition powered by deep neural networks. Wired says deep learning is greedy, brittle, opaque, and shallow, and that’s all true, but a lot of important things can be accomplished even with these limitations.

Useful AI is still more about how we formulate problems and what data we’re able to collect than it is about fancy new models -- on the technical side, today’s “AI” is as much about networking, systems, and databases as it is about new algorithms. Ultimately, it is about humans.

Human performance serves as the benchmark for hypey AI (saying methods have reached human parity on some problem or another is all the rage). Yet, human concerns are largely ignored in hypey AI. They can only be ignored for so long, though, as they come roaring back to the forefront when AI moves toward practical application and fails to progress. Hype deflates when humans are considered. Self-driving cars seem much less possible when you think about all the things human drivers do in addition to driving on well-known roads in good lighting conditions. They find passengers, they get gas, they fix the car sometimes, they make sure drunk passengers aren’t in danger, they walk elderly passengers into the hospital, etc.

We’re already shifting away from hypey AI replicating human performance, and moving toward more practical, human-centered application of machine learning. If hype is the rapidly melting tip of the iceberg, then the great human-centered applied work is the much larger mass floating underneath, supporting everything.

Statistical Pattern Recognition and Non-deterministic Humans

A few days ago, I mused on Twitter whether the hype would go away if we stopped calling the field “AI” and instead used the much more specific and correct term, “statistical pattern recognition.”

Jeff’s tweet, which read: “if SPR (statistical pattern recognition) was the ubiquitous term, instead of AI, would it change the hype? … I think so, and it helps to illustrate how much hype there is.”

Others have a different prescription -- Judea Pearl says we need new approaches that can do causal reasoning, Pedro Domingos is looking for “the master algorithm”, and others want to move beyond “cognitive function optimization of animal-like abilities” toward human-level intelligence.

Regardless of how it’s framed, these discussions are happening because “AI” conveys an idea of intelligence -- human intelligence -- that current approaches can’t meet. Our systems lack common sense, the ability to draw analogies across domains, the ability to reason about causality, and other components of intelligence necessary to mimic and interact fluidly with non-deterministic humans[1].

Statistical pattern recognition is nevertheless an incredibly powerful tool. To leverage it fully, we need to do the hard work of uncovering problems that are both important enough to matter and narrow enough that SPR will work well. Discovering important problems, mapping them onto computationally tractable solutions, collecting meaningful datasets, and designing interactions that make sense to people is where HCI and its methods shine.

HCI (and the incorporation of HCI methods by people trained in AI) is why I think this time around we’ll have not an AI Winter but an AI Autumn. People who can apply ML to solve real human problems will become the most important tech people out there. Powerful ML is increasingly captured in easy-to-use libraries; if you want to stay ahead of the curve, you need the skills we teach in our HCI curriculum.

If your goal is to fight through the winter, with hopes of someday developing truly intelligent AI, then break free from the stranglehold of deep learning and practical application, and go forth bravely.


If your goal is to reap the rewards of the harvest, study HCI.

How HCI Reaps the AI Harvest

HCI’s strength comes from a combination of disciplines -- at least computer science, design, and the behavioral sciences (psychology, cognitive science, etc.). Someone skilled in HCI can use a variety of human-centered methods to understand the present, design and implement futures, and validate those futures. As in most fields, people specialize: someone might focus on studies of how people use current technology, on designing speculative or provocative futures, or on building prototypes of future technical systems for people to use.

Here are some areas where I think HCI (and related) research and practice are poised to reap the harvest in the AI Autumn and remain relevant regardless of what advances occur in the coming decades toward truly intelligent machines:

Intelligent Applications to Support Humans

As the methods for machine learning become better understood and better packaged into tools, the biggest challenge will become figuring out how to apply them to real human problems. This is where HCI excels!
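To give a sense of just how packaged the modeling itself has become, here’s a minimal sketch using scikit-learn with toy stand-in data. The few lines of model code make plain that the hard part is everything around them: the problem framing, the data collection, the labeling.

```python
# A minimal text-classification sketch with scikit-learn. The model takes a
# few lines; deciding what problem to solve, gathering examples, and writing
# labeling guidelines (sketched here as toy data) is where the real work is.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for a real, carefully collected dataset.
texts = ["turn on the lights", "play some jazz",
         "turn off the lights", "play a podcast"]
labels = ["home_control", "media", "home_control", "media"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["please turn the lights on"]))  # likely ['home_control']
```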

In the very early days, the focus of intelligent machines was on Intelligence Augmentation (IA); this is what Vannevar Bush argued for in “As We May Think.” We think of Douglas Engelbart as the father of the mouse, but his “Mother of All Demos” spent much more time on how computing technology could augment the human intellect generally. Engelbart wrote about this extensively. For a while, this area was called “intelligent user interfaces,” lending its name to a popular conference in the area. Now that “human augmentation” is coming back into vogue as the limitations of AI (and the impossibility of AGI) become clear, it’s worth reading this older work, as many of the insights are profoundly relevant.

Ongoing work in HCI is figuring out the hard problems of supporting humans -- collecting and scaling new datasets, figuring out new ways for humans and machines to collaborate, inventing systems that make devices and the world more accessible regardless of one’s abilities, creating ML-powered sensing systems for interaction and for health, and working toward systems that help people better create ML models.

The challenge and impact of this area are tied to the fact that it is fundamentally about discovering and solving new problems, rather than improving on solutions to existing ones. The full arc is thus discovering and validating a problem, iteratively coming up with potential solutions, prototyping and refining those solutions, and finally validating that the solution actually solves the intended problem.

As machine learning algorithms are commoditized, those who can work along the entirety of the applied machine learning arc will be the most valuable.

Design and AI

HCI folks have long been at the forefront of thinking about how humans will interact with AI, and of doing the work that allows them to do so effectively. You can see this in the “agents vs. direct manipulation” debate between Pattie Maes and (HCI pioneer) Ben Shneiderman in the 90s. Ben went on to help found the field of information visualization as a methodological response to how humans could directly interact with an increasingly data-rich and complex world.

People working at the intersection of AI and HCI realized long ago that there is something different about building user interfaces that include “AI” in them -- especially that the AI is uncertain and often incorrect. Eric Horvitz and others called this “Mixed-Initiative Interaction.” You can read about it in the (now) classic paper (CHI 1999), although I also like the version that includes commentary by AI luminaries, such as James Allen (conversational interaction). Eric, along with new authors like Saleema Amershi, produced an updated take on this in their CHI 2019 paper, “Guidelines for Human-AI Interaction.”
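One recurring pattern from that line of work -- sketched below in my own illustrative code, not code from the papers -- is to let the system take initiative only when it’s confident, and to hand the decision back to the person otherwise. The threshold and the ask_user helper here are assumptions for illustration.

```python
# A sketch of one classic mixed-initiative pattern: act automatically only
# when the model's confidence clears a threshold; otherwise defer to the user.
# The threshold and ask_user callback are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.9  # tuned per task: cost of errors vs. cost of asking

def mixed_initiative_step(model, user_input, ask_user):
    probs = model.predict_proba([user_input])[0]   # scikit-learn-style API
    best = probs.argmax()
    if probs[best] >= CONFIDENCE_THRESHOLD:
        return model.classes_[best]                # confident: take initiative
    # Uncertain: surface the top guesses and let the human decide.
    top = sorted(zip(model.classes_, probs), key=lambda x: -x[1])[:3]
    return ask_user(f"Did you mean one of {[c for c, _ in top]}?")
```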

At some point, it became a bit too common for humans to be considered only near the end, which is really too late. A colleague whom I’m not sure I should name described it as, “a lot of the work has been figuring out how to slap lipstick on the AI pig.” Thus, these days the forefront of Design and AI is understanding how designers can use machine learning as a design material. A big part of that is teaching designers how to think about machine learning. This isn’t (only) about figuring out how to present the results of uncertain AI to users in an interface: it’s about figuring out what problems should be solved, which ML approaches match up to human expectations given a problem, and which problems can be solved well enough for a particular use case.

Design is quickly becoming the differentiator between similar products, and so the designers who can work best with machine learning will provide the most value.

Computational Social Science

Machine learning is working its way into everything that we do, and so we need to think carefully about its implications and about what we can do to mitigate its negative effects. The methods brought to us by computational social scientists tend to be oriented more toward studies of humans, using a variety of techniques taught in HCI and borrowed from foundational fields like psychology and cognitive science, e.g., surveys, interviews, log analysis, and ethnography.

These techniques have already yielded incredibly important insights into how users understand (or misunderstand) the algorithms they interact with (e.g., the Facebook news feed), how YouTube’s recommender system may encourage extremism, the mechanics of how false narratives spread on social media, how user interface elements impact online discourse, how users perceive privacy online, and on and on.

HCI doesn’t have a monopoly on identifying or addressing issues like these, but it is uniquely positioned to expose problems and intervene, given that we are also builders and designers.

===

This is just a quick rundown of related work and of areas important for the coming AI Autumn. For more, check out my Human-AI Interaction course schedule.

Summary

“Just do good, high-quality work, and it will all work out, ok?!?” … In AI, broadly, it’s increasingly clear that an AI Autumn is coming, and to prepare you’ll want to skill up in the areas and methods where HCI shines.

A harvested field.


[1] This phrase is borrowed from Gierad Laput; we’re not sure it’s technically accurate, but I think it does a decent job of conveying to computer scientists why humans are so challenging.

