Reflections on digital technology from a Capitol siege
The digital fallout following the siege on the Capitol shows we are not keeping up with the pace of technological change.
Facebook and Twitter banned Trump first, AWS pulled the floor out from under Parler, and more fallout from the domestic attack is still to come.
A simple observation underpins much of this post, and it begins with a common phrase:
The internet is the new public square.
This seems fine at first glance, but there is a troubling omission. In reality, it should read:
The internet is the new public square and no two people see the same square.
No two “internets” are the same. Scrolling through a friend’s curated feed is an illuminating exercise. The lack of transparency around this fact is the defining naïveté of our technological period; it could also be described as a façade or an illusion. It is difficult to plan for the future knowing this is a reality we are more or less stuck with: those distributing siloed information frequently benefit from the silos.
Last week’s events were surreal. They reaffirmed my motivation for this project and for bringing more people into the goal of aligning the technology landscape with the needs of a just, fair, and blossoming society. I am still optimistic about our world’s future, and I believe most of the burden for improving it will fall on individuals willing to make sacrifices to create change.
Losing the public square
Everyone’s internet is different. Many of the channels are the same, but the content varies wildly. Radicalization, polarization, and marginalization aside, the simple fact is that the digital world each person interacts with is different. The public square was the place for discourse and productive engagement, but when everyone is fed a different information diet, coming together becomes nearly impossible. Repairing this now seems intertwined with repairing our democracy.
Everyone’s internet is open to influence. A few companies control the magnitude of influence that can be exerted, consciously or unconsciously, directly or indirectly. Programs can target groups and individuals, whether with advertisements that interrupt your video or with sidebars that seep into your brain space. The only way to be free of this influence is to take substantial steps to remove yourself from the system (if anyone has good resources for doing so, I would like to increase my digital distance: the self-imposed barriers against my own addiction and against the internet tracking me). Financial incentives run our internet, and individuals have little power beyond choosing the extent to which they play.
What recovery from this education gap looks like is not clear. Digitalization has definitely accelerated the pace of user-facing change compared to historical cycles of technological disruption and backlash. For example, Facebook was founded in 2004, the News Feed was added in 2006, and “the algorithm” started appearing around 2009 (most of these changes were received negatively by users but were the right choice for the company’s financial future). We are about a decade into the content-prioritization experiment, and further personalization and black-box experimentation are around the corner.
In the next decade, we will also likely see individually tuned websites built on large language models. If you know a reader prefers certain political tendencies or writing styles, why not increase retention by slightly tweaking articles with machine learning? I think this is extremely economically viable, but it adds an entirely new layer to the problem of losing ground truth and the public square. Fine-tuning to individual interests at the expense of global consensus is a blatant example of overfitting, where society bears the cost of the increased variance.
Adding layers to the problem before we have a chance to treat the original symptoms (let alone fix the underlying problems) pushes us into territory whose effects we have no way to model.
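To make the mechanism concrete, here is a minimal sketch of per-reader article tuning. Everything in it is an assumption for illustration: `llm_generate` stands in for any hosted text-generation API, and the `ReaderProfile` fields are invented signals, not anything a real publisher is confirmed to use.

```python
# A minimal sketch, not a real product: `llm_generate` is a stand-in for any
# hosted text-generation API, and the ReaderProfile fields are invented.
from dataclasses import dataclass


@dataclass
class ReaderProfile:
    political_lean: str    # e.g. "left", "right", "center" -- assumed signal
    preferred_style: str   # e.g. "casual", "formal" -- assumed signal


def llm_generate(prompt: str) -> str:
    # Placeholder: a real system would call a commercial LLM endpoint here.
    return f"<model rewrite for prompt: {prompt[:60]}...>"


def personalize_article(article: str, reader: ReaderProfile) -> str:
    """Rewrite one article so this particular reader is more likely to stay."""
    prompt = (
        f"Rewrite the following article in a {reader.preferred_style} style, "
        f"framed to appeal to a {reader.political_lean}-leaning reader, "
        f"keeping the surface facts intact:\n\n{article}"
    )
    return llm_generate(prompt)


if __name__ == "__main__":
    reader = ReaderProfile(political_lean="center", preferred_style="casual")
    print(personalize_article("The senate passed the spending bill today.", reader))
```

The unsettling part is how little machinery this requires; the ranking systems that already personalize which articles you see would only need to start personalizing the articles themselves.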
The people building our minds do not study the nature of the mind
In retrospect, I think the creators of the biggest social networks would all have wanted more people in the room had they known the networks they were building were brain re-wiring machines. I do not think it was inevitable, or even obvious, that social networks would become machines competing for attention with disregard for harm to the individual.
If you take a moment to notice what is going on in your brain when you reflexively reach for your phone (with or without a notification), it is pretty chilling. A little metal-and-glass rectangle regularly hijacks your brain for no reason. Some of us who consider ourselves more removed from our devices may actually feel a bit of a fight response and ignore the urge, which itself echoes addiction (here is a recent study on phone addiction). Most people are governed by a small subset of apps.
I am not sure whether it should be neuroscientists or yogis, but I feel that people other than software engineers should be in the conversation when designing systems whose reward metrics are repeated clicks.
(Note: there is a self-described tirade by Sam Harris on how you can be both nondualistically mindful and passionate about the issues facing our society.)
Re-building clickthrough
The crucial, under-explored question is what other models for human-app interaction exist that still produce valuable apps. Ultimately, there needs to be a human model in the assumptions, one that accounts for the possibility that happy customers are long-term customers. Some of these decisions sound like reasons why we need government: high costs now for long-term gains and opportunities. How do public companies value ideals that will not show up on a balance sheet for over a decade?
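As a toy illustration of what a human model in the assumptions could look like, here is a minimal sketch of a ranking objective that blends immediate clickthrough with a long-horizon satisfaction proxy. The signals, weights, and item names are all assumptions made for this example, not any platform’s actual formula.

```python
# A toy ranking objective, not any platform's formula: predicted_click_prob
# and wellbeing_proxy are assumed signals in [0, 1].
from dataclasses import dataclass
from typing import List


@dataclass
class Item:
    name: str
    predicted_click_prob: float  # short-term engagement signal
    wellbeing_proxy: float       # long-term satisfaction estimate (e.g. surveys)


def score(item: Item, horizon_weight: float = 0.5) -> float:
    """Blend immediate clicks with a long-horizon retention proxy.

    horizon_weight = 0.0 is pure clickthrough optimization; raising it
    encodes the bet that happy customers are long-term customers.
    """
    return ((1 - horizon_weight) * item.predicted_click_prob
            + horizon_weight * item.wellbeing_proxy)


def rank(feed: List[Item], horizon_weight: float = 0.5) -> List[Item]:
    # Highest blended score first.
    return sorted(feed, key=lambda it: score(it, horizon_weight), reverse=True)


if __name__ == "__main__":
    feed = [
        Item("outrage bait", predicted_click_prob=0.9, wellbeing_proxy=0.1),
        Item("long-form explainer", predicted_click_prob=0.4, wellbeing_proxy=0.8),
    ]
    print([it.name for it in rank(feed, horizon_weight=0.0)])  # clicks only
    print([it.name for it in rank(feed, horizon_weight=0.7)])  # long-term view
```

Setting `horizon_weight` to zero recovers pure clickthrough optimization; the hard part is not writing the blended objective but justifying a nonzero weight to shareholders when its payoff is a decade away.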
The re-building of the social networks will likely be the primary way competition emerges against the established players. It is hard to know when the next player will arrive, but it is inevitable that some of Facebook, Google, Snapchat, or Twitter fade toward the relevance of IBM. I would guess that two are pseudo-replaced within 20 years (Google is the one most grounded in a real product, though maybe I should have said YouTube).
Models for technological accountability
Given the power technology has, it is imperative that we take steps toward technological accountability and alignment with “humanness.” Broadly, we should make technology support and elevate what it means to be human, rather than prey on our mental pitfalls. Actualizing this ideal is far more difficult. A few ideas I have talked about with people:
Clinics for computing (a wide group of colleagues and I submitted a tutorial on this idea to the FAccT Conference, so you will likely hear more): Law, medicine, and the social sciences have a deep history of teaching clinics where trainees learn the trade by engaging with real-world situations. Historically, these clinics emerged from a recognized need for a broader education, and it is clear that the future computer science practitioners of the world need one too. This would look like short-term projects during graduate education where students work with companies to contextualize their primarily digital products in the social domain.
Overlapping appointments when in industry: As we have seen with the firing of Timnit Gebru, the clock is now running on companies restricting scientists’ freedom to publish whenever it comes at a cost to the bottom line. Generating other models for research and accountability is crucial. While a revamped computing infrastructure in the US academy would be great, we may need other options. An idea I converged on in a short conversation with Dylan Hadfield-Menell: can industry researchers hold small joint appointments at academic institutions or nonprofits? This would flip the usual model of 80% academic, 20% industry to something like 80% industry, 20% something else that maintains freedom. It could take the shape of a philanthropically backed organization (important to be independent of Big Tech, since many AI think tanks are funded by large corporations).
For another post on the changing landscape of the internet and politics, see this piece from Ben Thompson.
If you want to support this: like, comment, or reach out!