Democratizing Automation
Getting everyone to benefit from the artificial intelligence boom could be more challenging than some expect.
This is a re-send of an essay from elsewhere, but it inspired me to create this blog. I hope you enjoy. Happy 4th of July to my American readers.
How do we balance the effects of AI superpowers, concentrated in the hands of a few, on the lives of everyone else? Through a variety of essays, podcasts, and conversations I’ve come to a set of concerns about how automation-based products could further polarize our society. I write on three topics we need to address to move intelligent systems towards fairness.
The emotional capacity of machines and how it impacts users.
How to make data-driven systems free of bias.
How to mitigate the wealth aggregation that follows the aggregation of power in technology companies.
Why do I say this is hard? Watching the nation’s lackluster response to the coronavirus shows that we can barely respond to an immediate, pressing problem. I would rank the need for democratizing automation as a risk just below climate change for the destabilization of our society. These problems are similar because they slowly creep up on us, changing our way of life incrementally, without us noticing that the future is here.
Automation is not a zero-sum game (i.e., everyone can win and gain from its implementation), but there are sure to be drastic costs for underserved groups and societies. I hope to do my part in building systems that help everyone, rather than systems that disproportionately help a few at an incremental cost to others.
Emotional capability in AI
Modern life has put a lot of strain on our humanity by moving so many of our interactions behind a screen, and the intelligence boom will do the same (adults may spend up to 34 years of their lives behind a screen). The revolutions of data-driven methods are still young. I am worried about what happens when we no longer get to see a smiling face at the coffee shop. I am not saying that is the interaction people crave and need, but it is something; it is a connection. These small and numerous connections add up and matter to the long-term mental health of individuals. I am very open about my mental health struggles, which are exacerbated by the technological funneling of communications. It affects everyone, and I don’t see many paths to making it less of a problem.
Will the autonomous coffee machines be able to make humans feel like they are part of the whole system? We all have Zoom fatigue (source, source) from a month of working online; if life opens back up and screens are all we get to see, I don’t think that will be a huge win. Removing human interaction is a burden on customers too; it’s not just a cost-saving technique.
Affective AI
Affective AI (link), a company started to boost the emotional intelligence of individuals with conditions that make it challenging to slot into typical social settings, holds a lot of the keys in this space. The founders want to serve the underrepresented and those who have the most to gain.
I definitely recommend the episode of the Artificial Intelligence Podcast below, which defines and dives into these problems.
Rosalind Picard: Affective Computing, Emotion, Privacy, and Health | Artificial Intelligence Podcast (lexfridman.com). Picard is a professor at MIT and director of the Affective Computing Research Group at the MIT Media Lab.
Tech companies monetizing emotions
Do we want machines to track our emotions too? In the next few years I expect certain social apps to want to track eye motion to measure engagement with advertising (source one or two). What if the advanced facial-recognition hardware that acts as a bio-passport into our phones becomes an emotional measurement device for optimizing ad tech? I honestly don’t know, and I don’t hear enough people talking about this concern (source for how emotion can tap into advertising technology).
With tech companies able to change their terms of service whenever they want, I think it’s a matter of when rather than if. Consumers need to have a say in what is being tracked, or feature creep will add emotions to the list of what is tracked constantly (location, browsing, purchasing, etc.).
Bias-free data
Data-driven approaches (also known as learning-based approaches) take in whatever signal they’re given and optimize some output; if the signal encodes a bias, the optimized output will reproduce it (see the sketch below).
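As a toy illustration (a minimal sketch with entirely synthetic data; `skill`, `group`, and `hired` are hypothetical names, not from any real system), here is how a model fit to biased historical labels reproduces that bias:

```python
# Minimal sketch: a model trained on biased labels reproduces the bias.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # true qualification signal
group = rng.integers(0, 2, size=n)    # a protected attribute (0 or 1)

# Historical hiring labels: decided on skill, but group 1 was penalized.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, differing only in group membership:
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
# The group-1 candidate receives a noticeably lower score.
```

The model is never told to discriminate; it simply optimizes agreement with labels that already encode the disparity.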
Current bias in tech
Who spends the most on mobile devices? The affluent (primarily on Apple devices). Source.
Who has the highest success rate on facial recognition tools? Caucasians. Source.
These are just two examples of biased data already in our day-to-day lives. Neither is catastrophic on its own, but as automated systems reach into more areas of our lives, the stakes of these skews will grow.
Future bias in tech
If AI systems are used to recruit for jobs, what happens when certain demographics are missing from the training set, so they are never selected? Source.
In the medical system there is already systematic disparity in treatment effectiveness across demographics; what happens when decision-making is fed into computers? Source.
In both of these, the computers will copy the data given to them. For example, when a neural network fits data, its accuracy on a subgroup roughly tracks how densely that subgroup is represented in the training data (the sketch after this paragraph illustrates the effect). There is very little work on how to address this, just a lot of white papers on how these issues will affect us.
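Here is a rough sketch of that claim (again with synthetic, hypothetical data): an underrepresented group whose underlying pattern differs from the majority’s gets noticeably worse accuracy, because the training objective is dominated by the well-represented group.

```python
# Minimal sketch: per-group accuracy tracks training-data density.
# Group B is 9x rarer than group A and follows a different pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, w):
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(scale=0.3, size=n)) > 0
    return X, y

Xa, ya = make_group(9000, np.array([1.0, -1.0]))   # well-represented
Xb, yb = make_group(1000, np.array([-1.0, 1.0]))   # underrepresented

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

for name, X, y in [("group A", Xa, ya), ("group B", Xb, yb)]:
    print(name, round((model.predict(X) == y).mean(), 3))
# Group A scores near-perfectly; group B scores far below it.
```

One pooled model cannot serve both patterns, so the minority group silently absorbs the error.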
I claim we need to work more on data handling and aggregation. I’ve talked about the data that is driving value in companies, but we also need transparency into some of that data. Here’s a good summary from MIT Sloan on these issues of data bias.
Who steers this?
I was also happy to see that Berkeley EECS hired a new faculty member interested in these issues, Rediet Abebe. Welcome! I think faculty have a large platform to voice these issues, but the companies running the platforms see the potential problems far earlier. This is something that is hard to regulate, so I think it falls on engineers to be aware of the issues. This is why I think the best computer scientists have a somewhat diverse background: it gives them the ability to understand broader issues.
See pushes for fairness from Facebook, Google, Amazon, and Microsoft. These are just the public efforts.
Financial fairness
The first half of 2020 is going to be a great aggregation event in the technology landscape. Dried-up seed funding, failed go-to-market strategies, and the random hammer of economic lockdown (source) all point the same way: the big get bigger (FAANG, or whatever the preferred abbreviation is for the big five tech companies). I work in the area of automation and expect a lot of money to move there, but it’s all behind the veil of stealth-mode startups and entrenched engineering companies.
Everyone talks about automation looming, but why is it so hard to find (even for experts in the area)?
My goal is to study which automation will catch people off guard with its unprecedented scale of impact (e.g., autonomous cars) and which will turn out to be toy applications with good PR. I want to keep writing about democratizing automation. I want to build tools and understand trends.
(SoftBank had a hilarious visualization of the fall of tech unicorns in their Q1 earnings call; source.)
AI-based Trillions in GDP Growth
Can we please have more of the gains from the American technology giants benefit the United States broadly? I’m not going to say anything about changing the tax structure, but we need to figure out a way that, as GDP skyrockets, it isn’t only creating bigger billionaires. The marginal value added per digital user is astonishing. Please reach out or comment if you have good resources on ways to normalize financial gains while retaining the benefits of capitalism.
Notes from the AI frontier: Modeling the impact of AI on the world economy (McKinsey, www.mckinsey.com).
Technology companies are not incentivized to limit their growth by addressing these long-term risks. We need effective government to moderate the long-term trajectory of technology, because government is the only actor that can take a financial risk for far-out future security.
What’s new with me
I am reading (newsletters/blogs):
Strategies for Collecting Sociocultural Data in Machine Learning. A paper addressing how we can get more equitable datasets in machine learning. Strongly recommended for any practitioner in the area.
Building AI Trading Systems - Denny Britz. A good synopsis of why reinforcement learning is a good candidate for automated trading, from someone who actually built one.
Books:
Becoming - Michelle Obama. A great story to listen to. It makes me remember that every individual’s path is so different, and those who are authentic and driven end up with great opportunities.
Human Compatible: Artificial Intelligence and the Problem of Control - Stuart Russell: Why we need more future-conscious AI designers. This week I was going through a part discussing making robots look human, which I think we should avoid. Consider robots that care for our young but look human: a child may mistake one for a human, and mistake it for caring, and when that veil is removed there could be dire emotional consequences. Why do robots need to look human to be useful?
I am listening to / watching:
Sam Harris with Toby Ord on Existential Risk. Why? Because they discuss their philosophies on donating money and why donating abroad, and automatically, is likely best for impact (you can’t forget to do it, and a dollar goes further abroad).
Dithering from Ben Thompson and John Gruber (a paid subscription I’m happy with). I haven’t plugged this yet, but the thrice-weekly 15-minute episodes are fantastic for following the drama and stories in the tech industry.
Find more about the author here. Forwarded this? Subscribe here.