What is it that robots cannot do?

In 2015, robots not only build our automobiles but also drive them. Robots vacuum floors and work alongside human beings in warehouses. And sleepless, tireless robots read everything they find on the web.

When they are found in factories, robots are potentially very, very dangerous because, of course, the machines are mindless and unconscious. In the view of at least one legal scholar, similarly oblivious text-reading machines may also be immune from copyright law. In a forthcoming article for the Iowa Law Review, James Grimmelmann asserts that humanity and copyright face a kind of marginalization in a world of literate robots. As he explained for CCC’s Chris Kenneally, “non-human reading” is also “non-expressive reading.”

“I borrowed the term from the scholar Matthew Sag, who was trying to find a way to explain a trend he saw in case law about fair use and computers and technology,” Grimmelmann says.

“Search engines aren’t human. [As they might say,] ‘We’re not really reading all these Web pages to enjoy them or appreciate their content. We are just indexing them. We’re making a map of the web, and a map is not a territory. We are not competing with the authors or the publishers of these works. We’re just, in fact, directing potential real readers to them.’

“You see these arguments in a bunch of different places, within copyright and across other bodies of law,” he continues. “There’s the sense that familiar human activities—reading, observing the world, moving around, driving—somehow don’t count if computers do them. It might have the same results, but because the computer is not itself human, there’s this intuitive sense that the law ought to leave it alone, or not treat what it does as the same in kind as when a person does it.”

James Grimmelmann is a professor of law at the University of Maryland, where he studies how laws regulating software affect freedom, wealth, and power.
