The Collingridge Dilemma and Current Policy on Robots
Legal cases impacting robotics, innovation vs policy, the sci-fi future of robotics.
To start, what is the Collingridge Dilemma? From Collingridge’s 1980 book The Social Control of Technology, via Wikipedia, it:
is a methodological quandary in which efforts to influence or control the further development of technology face a double-bind problem:
An information problem: impacts cannot be easily predicted until the technology is extensively developed and widely used.
A power problem: control or change is difficult when the technology has become entrenched.
I translate as follows: when technological innovation is rapid, it seems disproportionately difficult to enact useful policy (control). The dual: when innovation is slow, policy seems simpler and less restrictive. The effort required to enact policy is proportional to the rate of change of the technology.
This rate-dependence of the policy challenge makes it very difficult to put the proverbial robotic genie back in the bottle, even when the stakes are high. I think we have seen this very recently in ride-share economies: Prop 22 in California dramatically legalized Uber and Lyft’s tenuous position post-hoc (for more, see Technology Assessment Debates).
This is ultimately a pacing problem between technological innovation and legislative control. In many fields, such as human-robot interaction (HRI), the persistent sentiment is that policy holds back innovation, while Ryan Calo (more later) says that legislation must wait for the technology or else it will be overly restrictive. An example: there are laws in some states that limit how wheeled robots can be used publicly for ground delivery and logistics (for example, Starship got approval), but when Boston Dynamics’ Spot came out it was not covered by the policy because it uses a different locomotion mechanism (legs). Here, it does seem like policy came too soon.
Techno-determinism vs Techno-libertarianism
Do you think the path for technological progress is predetermined and humans primarily influence when it happens? If so, you believe in techno-determinism. Others think that there is no way to know which technologies will emerge (I am closer to this side of things — no guarantees).
Collingridge’s Dilemma originated from a view wary of technological determinism and has grown into a frequent term among technology-and-society researchers. Through a deterministic technological lens, we think we know how things will evolve and that it’s best to regulate during certain phases. If not, one may not think regulation is ever useful (a techno-libertarian view, so to speak): let things play out. Going too far in either direction could be problematic: the former through restriction, the latter through over-adoption of dangerous technology, and there is not a lot of room to undo in either case.
The different views of the future of robotics play a big part in how regulation is viewed. It seems to me that society has made up its mind that robots are inevitable (ignoring what that means for policy for now). This is interesting, considering that robotics has a history of being imagined as the unattainable in science fiction. There are some questions to ask about the future of robotics:
Is there anything guaranteeing robots to be more pervasive in 5 years?
Are there any rules blocking robots from being more effective in 5 years?
Having a look at robotics policy, I think the burden is on people to make robots work.
US Laws and Precedents on Robots
The history of precedents involving robots in society and robotic innovation is very interesting, and it is clear that some regulation is in place that most don’t know about. Quotes are from Ryan Calo’s 2016 paper, Robots in American Law.
1947: Army fighter on autopilot (J.M. Spaight, Air Power and War Rights, London: Longmans, Green and Co. Ltd., 1947). This is the first case where autonomous agents appeared in legal commentary.
1950: Frye v. Baskin: this case set a precedent for some questions regarding “who is in control of the car?” Specifically, in this case, the owner’s son was directing a new driver, who ended up crashing, and the question was: who is responsible? This line of reasoning will likely continue with teleoperated cars a century later.
In the resulting suit by the father against his son’s friend, the court refused to find the defendant negligent as a matter of law. According to the court, plaintiff’s son John was really the driver. The defendant “controlled the car the same as if she had been a robot or an automaton. When John said ‘turn,’ she turned, mechanically.” She was merely “the instrumentality by which John drove the car.” Accordingly, “if it were negligence, it was John’s and not hers.” Or at least the jury was entitled to so hold.
1958: Louis Marx & Co. and Gehrig Hoban & Co., Inc. v. United States (and a series of follow-ons): this case determined whether a moving robot represented an animate object, in order to fit new robotic toys into tariff law for dolls. There’s an interesting distinction here between robots and robotic toys, which could have future implications for how robot manufacturers describe their products.
In other words, although a robot is a machine that simulates a person, a toy robot is only a simulation of the simulacrum. We are left to wonder how robotic a toy must be to itself qualify as a robot.
1989: Columbus-America Discovery v. The Unidentified, Wrecked, and Abandoned Vessel, S.S. Central America: set a precedent for whether a teleoperated robot has the authority a human would have, in a dispute over whether a robotic search vehicle could claim gold in a wreck.
The court fashioned a new test for effective possession through “telepossession,” consisting of four elements: (1) locating the wreckage, (2) real-time imaging, (3) placement of a robot near the wreckage with the ability to manipulate objects therein, and (4) intent to exercise control.
1993: White v. Samsung: Samsung ran an ad with a “female-shaped robot … wearing a long gown, blonde wig, and large jewelry” that Wheel of Fortune host Vanna White argued depicted her without her permission. White won, and the court held that the company could not use an agent with White’s “likeness.”
In White…, courts are struggling instead with whether a robot version of a person can be said to represent that person in the way the law cares about.
These, of course, are but a few examples of legal decisions that can impact the future of robotics. The cases set precedents on the scope of automation and who is responsible for it. You can find the original paper and an Atlantic article summarizing it. The regulation seems quite separate from the problems of actually implementing robotics (and separate from digital media policy, which is currently very topical).
Where the law is at: drones
As someone who works with quadrotors and other small, mobile robots, I tended to think that regulation was actually stopping some applications nationally, but in reality the barrier is usually the technical challenge of making a good-enough robot.
Flying drone policy has not changed substantially since 2012 when the FAA first started considering private drones in the airspace with the FAA Modernization and Reform Act of 2012 (setting up open skies by 2015). Here is a congressional summary on the “Integration of Drones into Domestic Airspace,” from 2013.
Fast-forward to 2020, and drone delivery and a commercialized micro-airspace still seem to be one step away. A bunch of companies got special approval for domestic delivery last year, but it turns out progress is mostly limited by technical challenges like flight time, stable control with uneven payloads, and more. To paint a fuller picture, there are definitely still policy issues. Consider how police are using a tethered drone to skirt current regulations on using flying drones for mobile sensing (which require drone pilot licenses). It doesn’t seem like laws are by any means limiting the application of new technology in this space, though.
Perpetual future of robotics
I think the history of robotics being heavily shaped by a lens of science fiction acts as a strong prior for practitioners. The goalposts always move forward. There’s a sense of what-if-ism that is not present on the legal side of proceedings.
The view of policy from the other side (law) of my techno-centric circles definitely made me see things a little differently, and I wanted to reflect on what this means for robotics culture (in industry and academia). Research in robotics feels like we are one breakthrough away from robots being awesome, but I think the metrics used in research don’t align with creating practical at-scale agents. Practical agents require robustness, while most research is trying to demonstrate the feasibility of something for the first time. Putting cutting-edge control and machine learning into a physical system changes the tangibility of the problem and the data, making it easier to follow what is happening. The embodied nature of it a) makes it harder to function and b) makes policy different.
Plenty of new companies emerge and fail in robotics, and it is not due to policy. It seems like we only notice the politics of automation-based companies when they get so big that they displace large existing structures (à la Uber). Why, then, is there still a regulatory wariness among engineers? I think it is spillover from the large digital companies that are influencing society now (and likely should be regulated). Closing the sci-fi-to-reality gap will bring more cases of this, but it will likely be harder than some outlets think.
This post was heavily inspired by this podcast, featuring Ryan Calo of the University of Washington School of Law, and by conversations with my colleagues at GEESE. I plan on returning to more technical content soon, but I think the last few points have been getting somewhere. Cheers.
Hopefully, you find some of this interesting and fun. You can find more about the author here. Tweet at me @natolambert, reply here. I write to learn and converse. Forwarded this? Subscribe here.