Transcript: When Robots Become Authors and Publishers

Exploring AI Ethics

Recorded at STM 50th Anniversary Frankfurt Conference, 15 October 2019

with

•Marjorie Hlava, Access Innovations
•Attorney Carlo Scollo Lavizzari
•Niels Peter Thomas, Springer Nature

For podcast release December 3, 2019

KENNEALLY: Public policy and legislation across the globe regulate and restrict technology in countless ways. Yet the laws of technology itself are few. And even among those, most are generally not well known.

Melvin Kranzberg, an American academic whose field was the history of technology, created just six laws of technology nearly 35 years ago. Kranzberg admitted that his laws were neither codes nor commandments, but truisms deriving from longtime immersion in the study of the development of technology. Kranzberg determined that technology comes in packages big and small and that invention is the mother of necessity. The most essential of Kranzberg’s laws is the first one – technology is neither good nor bad, nor is it neutral.

The emergence in our time of enormously powerful computing technology has moved the concept of artificial intelligence out of science fiction and into commercial and cultural fact. Currently, AI is building a strong presence in the scholarly publishing industry, where there are now many new AI-related initiatives, products, and services. These opportunities challenge us to consider the area of ethics as applied to AI uses within publishing. Organizations like STM and the industry at large should debate and eventually must agree to fundamental concepts and principles of conduct that will guide editors and executives to know right from wrong as these AI solutions proliferate.

Amara’s law, named after Roy Amara, a president of the Institute for the Future, captures concisely the predicament we face. We tend to overestimate the effect of technology in the short run and underestimate the effect in the long run. The time for balanced, thoughtful, industry-led initiative is now. With and without us, legislation will be written – it is being written – that will govern AI and its technology relations – machine learning and robotics.

I look forward to exploring these issues and challenges with my panel, and I want to introduce them, moving from my right first. I want to welcome Niels Peter Thomas. Welcome. As managing director for books, Niels Peter Thomas is responsible for Springer Nature’s portfolio of more than 300,000 academic book titles across all subject areas for imprints including Springer, Palgrave Macmillan, J.B. Metzler, and others. In addition, Niels has been appointed managing director of Springer Campus, which focuses on e-learning and distance learning programs. He’s also a university lecturer, covering topics including digitization in the publishing industry and the future of book publishing.

To his right is Marjorie Hlava. Margie, welcome. Marjorie Hlava is president, chairman, and founder of Access Innovations. Founded in 1978, the company provides information management services such as meta-tagging, thesaurus and taxonomy creation, semantic enrichment, and workflow consulting. Her research includes productivity of content creation, natural language processing, machine translations, and machine-aided indexing.

And finally, at the far end, a face familiar to everyone in STM is Carlo Scollo Lavizzari. Carlo, welcome. Carlo specializes in copyright law as an attorney based in Basel, Switzerland. He advises STM as well as individual publishers on copyright law, policy, and legal affairs. He’s admitted to practice in Switzerland, South Africa, England, and Wales.

Carlo, I’d like to open with you, because you have some important news to share with us. It’s an initiative underway around this particular topic by STM. And as much as things are in the news today, STM has been on this story for some time. And indeed, looking back to Future Lab, I believe in 2018, you identified that the future reader – or the reader of the future, rather – is a machine. That was the starting point for some of the work you are doing. So tell us about that.

LAVIZZARI: Thank you very much for this introduction. Indeed, I have to credit the Future Lab of STM, and Eefke, who gave a great panel this morning, for recognizing this early on the horizon. I think STM actually has a history of identifying topics early – I can only recommend the Future Lab to all of you. And indeed, by now – 2019-2020 – you cannot open any website or any newspaper without AI being the topic. So already starting back in 2017-2018 at STM, we found that it is necessary to break this down a little bit, unpack it, and look at what it means for publishers.

So there is a steering group – a steering committee across the intellectual property committee that I’m mostly active in, the standards and technology committee that Eefke is spearheading, and the public affairs group that David (inaudible) and Barbara Kalumenos assist with. And this group has said, OK, we have to realize that for the foreseeable future, it’s about data and quality data and whether publishers can be perceived as partners to make this data available, because the entire AI world is really a development now on big data and big computing power. The algorithms have been around for 40-50 years. That’s not new. What’s really new is the capacity to harness data. And a further avalanche of data is coming towards us, in any case, that will swamp all discussions. At least that’s my prediction. Whether subscription or open access, it’ll be all about data then.

Coming back to AI and the role for publishers, in the steering committee, we’ve broken this down to four areas. How does AI help publishers to do a better job in house, so to speak – back office – even though back office doesn’t really exist anymore in a digital company? But second, how can AI help to make the content that publishers sell – or offer – more valuable? And thirdly, are there specific skills in organizing content, in curating content, that publishers can actually extend? And AI may, in fact, become the main thing the publishers of the future will do – and indeed, maybe the reader of the future is perhaps a machine.

KENNEALLY: Well, expand on that point, because this is an area that publishers already have some expertise in, right – that is, curating, organizing data, information. So this is a natural progression of that work.

LAVIZZARI: Correct. So data – in German, we have this word Datensalat, data salad. It’s basically an unusable haystack of data. So what you really need for a successful AI project is to organize the data and prepare it – I’m sure we will hear more about this – so that it can be used for machine learning, for reinforcement learning, for directed learning, for testing, for debiasing – all the things that need to happen so that an artificially intelligent entity, which effectively perceives the world using data as its map of the world, does not end up with misguided results.

KENNEALLY: Right. And the issue for many legislatures – I know that in the US, in Congress, there’s been a bill proposed, and I believe it uses the word trustworthy in its title, and I think there’s a similar piece of legislation in the UK. And the emphasis there, again, is on trustworthy. Talk about the role that publishers can play in helping to address those concerns around trustworthy data and the danger if that’s not done.

LAVIZZARI: Yes, so I think I can almost make a link to the previous speaker about open access. The hard part is in fact, again, the culture. And for people to trust AI advances and AI tools, there needs to be transparency, there needs to be clarity as to the provenance of the data used. If I’m being accepted or rejected as a patient in a hospital in a decision-tool situation, I need to be able to trust this decision. Similarly, for a hiring situation, if the person making the triage is in fact a machine, I want to be sure there are no unfair biases based on historical data that this machine is going to use.

So I think publishers will have a great role in assuring that the data used to train machines, before they are let loose in the wild, so to speak, is correct – and also in almost becoming escrow agents for the log entries, effectively, of the data that the machine generates while it is running.

KENNEALLY: OK. And indeed, there is another initiative – I believe the European Commission has a proposal around trustworthy AI. Can you give us any update on that?

LAVIZZARI: Yes, the EU and each member state of the EU has cottoned on to this trend of needing to have effectively a country or an EU policy on artificial intelligence. And there’s a high-level group of persons and various consultations taking place in Brussels that want to set the right framework around crosscutting themes of AI and identify areas that need attention. At the moment, I think data acquisition is very top of mind. Transparency, again, comes into it – the stability of models used.

And at the moment, I think legislators haven’t – we haven’t really come up against people really wanting to pass law on AI at this point, outlawing perhaps the use of this, as some countries do in the area of genetics and genetic research. But people are aware that some standards are necessary. And I think it’s the human trust factor in the end. Without the population generally believing that results through AI are good and acceptable, human-centric, there cannot be a successful country policy on AI.

KENNEALLY: Right. And you make a good comparison with the genetics industry – or profession, I should say. We’ve been using ethics and law and legislation almost interchangeably, and for many, they are more or less the same. But they are different, and there are important distinctions. Draw those out. Because as I understand it, ethics are guides and principles, and the legislation that comes after is society’s commandments. Why is it important to develop, perhaps, some kind of ethical principles first, before that legislation comes?

LAVIZZARI: I think it’s not necessarily a question of first and before and after, but human activity and also technology is simply so multifaceted that laws alone will never be able to capture entirely the attitude of professionalism, ultimately, that people have to put into developing high-quality artificial intelligence tools. It’s almost like the financial markets. You can have laws galore, but at the end of the day, the ethics of the financial advisors are secured through professional standards and ethical rules of those financial advisors. I think it’s the same here.

KENNEALLY: Well, Niels Peter Thomas at Springer Nature, you confronted these issues, particularly the ethical issues, head-on with this project that was announced last spring, a book published – an e-book published – I think I have the title right – Lithium-Ion Batteries: A Machine-Generated Summary of Current Research. It was research that – or at least the project, I should say, was conducted in partnership with Frankfurt Goethe University, so one that’s very local as much as it is a global interest. And it was a project that took, as I understand it, something like 18 months from start to publication. Were you thinking about these ethical issues? When did you begin thinking about them? And what were some of the questions that came up in your meetings?

THOMAS: Actually, Carlo was referring to the machine as the reader of the book. And we tried to look at – from the other side – the machine as a creator of knowledge, the machine as an author, as a writer of books. So we actually did it – so half a year ago, we published this book, and it was actually an idea that was born exactly two years ago at Frankfurt Book Fair when I was standing together with colleagues thinking, is that actually possible? We thought it through, and we thought it takes at least one and a half years, and in the end, it was exactly the time we needed.

But for us, it was – from the beginning, it was very clear that the technology is only one half of a very important challenge. How do we manage to really, as you said, bring the data together, let the machine learn, produce something really meaningful? Because our ambition was that the chemistry community should say this is useful to us. We need it. So we want to read it. It must be a human-readable machine-written book. So that was our ambition.

And we thought from the beginning, we want to be first, because we believe it should be a publisher who does it first and not a tech giant, because – exactly, we wanted to make it very transparent from the beginning and say this is exactly what the machine can do right now. This is the state of technology as of today. We need to start a discussion on the ethical aspects of it and the responsibilities. So we made it very transparent in the book what the limitations of the technology are. I mean, this is our first attempt, and the community gave us very good feedback on it. But it’s far from being perfect.

And now, we need to really think about this other half of the challenge, which is really a legal challenge – what happens if somebody infringes the copyright of this book? Whose rights actually are violated? These are very new questions that we have to ask. But then, also, who takes the responsibility? What if the machine summarizes something where we say this is inappropriate? How do we do it? But there are also very practical issues around what we know to be the standard in our industry, like peer review. We can’t go back to the machine and say please redo chapter three, we don’t like it right now. It will be exactly the same outcome.

So we need to really negotiate new standards, and I believe we need to do this not as a single publisher, but we need a new idea in the whole industry. So this is why I very much welcome the initiative here to bring this together, because I think these ethical and these responsibility standards that we need to develop are at least as big a challenge as the technological challenge is and was.

KENNEALLY: Right. It raises so many questions. One, though, is fundamental, which is what does it mean to be a publisher? The idea or the notion that one can be a publisher by pressing a button seems to be somehow contrary to what we all feel.

THOMAS: Yeah, so I think as the managing director for books for Springer Nature, it’s a very compelling idea. If we don’t need any authors anymore, maybe we don’t need any editors anymore if we can do it all by the machine. But that, of course, will not happen.

So this is – as we very transparently say in the subtitle, it’s a machine-generated summary of existing – of current research. So we are producing a new perspective, but we are not producing – in a technical sense – new knowledge. But for readers who have no time to read all the 5,000 to 10,000 articles that are somehow summarized in such a book – and most of us don’t have that much time – it creates a new perspective. So it is kind of a new view and a bias-free summary of existing research, which can then be created almost at the push of a button.

And that gives us a completely new role. That’s true. And we have to think about, do we want this role? Can we take this role? Should we? Or should we involve more experts, more authors to check it, to correct it? Should it be our responsibility? Is it the responsibility of a review by the community? I think these are very, very important questions, but there is no final answer to them yet.

KENNEALLY: Right. And player pianos and phonograph records didn’t do away with concert pianists, either, so there is an example for us of where they add to – they are supplemental. And to the point about it, perhaps we need to tell people a bit more about the book. It’s a collection of work. It has gathered information on, I believe, thousands of articles. Is that right? Tell us more.

THOMAS: Well, I cannot go too deep into the technology, because in the end, it’s indeed quite complicated. But the basic idea is that we show to a pipeline of algorithms everything that we have published in a certain discipline in the last, let’s say, five years or so. In this case, it was everything about lithium-ion batteries, everything about electrochemistry. So there are between 5,000 and 10,000 articles, book chapters, database entries, and so on that we showed to the algorithm. Then the algorithm clusters them and says there are certain areas where there is more published knowledge about, certain areas where we don’t know much about it.

And then after this clustering, there is one human interaction. This is basically a figure that we give to the algorithm. We want to see this book in 250 pages, and we want four chapters. That’s what humans need to say. And then the algorithm creates a table of contents by selecting the appropriate clusters and bringing them together into superclusters in order to say, what do we want to bring together here? So there is one chapter about anodes, one about cathodes, one about data models, and so on. So this is all done by the algorithm.

And then the algorithm looks at what is published in this cluster, looks at certain measures – what is important, what is less important, where do we have more knowledge according to specific parameters, what seems to be more important to the community and whatnot, and then extracts the most important facts and text parts of the original works – of course, properly cited, so my legal colleagues assured me that we are not infringing copyright here. But then it is all summarized by the machine.

And then there is some semantic parsing, and we then adjust the length of the text. So we bring it together so that in the end, you have a text with some direct quotes, but mostly sentences that are not in the original works but are the combined knowledge of the different sources of content that we showed to the algorithm in the first place.
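The workflow Thomas outlines – cluster the corpus, take a handful of human parameters such as the number of chapters, derive a table of contents from the clusters, then extract and cite the most central passages – can be sketched roughly as follows. This is a minimal, hypothetical illustration of that kind of pipeline, not the system Springer Nature actually used; every function name, parameter, and method choice here is an assumption.

```python
# Hypothetical sketch of a "machine-generated summary" pipeline: cluster a
# corpus, turn clusters into chapters, and extract citable sentences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

def build_machine_summary(documents, n_chapters=4, sentences_per_chapter=30):
    """documents: list of article texts. Returns {chapter_title: [(sentence, source_idx), ...]}."""
    vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
    doc_vectors = vectorizer.fit_transform(documents)

    # Cluster the corpus; each cluster becomes one chapter. The only human
    # input is the desired number of chapters (and, in the book, a page count).
    km = KMeans(n_clusters=n_chapters, random_state=0).fit(doc_vectors)

    terms = np.array(vectorizer.get_feature_names_out())
    chapters = {}
    for c in range(n_chapters):
        member_idx = np.where(km.labels_ == c)[0]
        # Name the chapter after the cluster's most characteristic terms.
        top_terms = terms[km.cluster_centers_[c].argsort()[::-1][:3]]
        title = " / ".join(top_terms)

        # Extractive summary: keep sentences closest to the cluster centroid,
        # remembering which source document each came from so it can be cited.
        sentences, sources = [], []
        for i in member_idx:
            for s in documents[i].split(". "):
                if len(s.split()) > 5:
                    sentences.append(s)
                    sources.append(i)
        sent_vectors = vectorizer.transform(sentences)
        scores = sent_vectors @ km.cluster_centers_[c]
        best = np.argsort(scores)[::-1][:sentences_per_chapter]
        chapters[title] = [(sentences[j], sources[j]) for j in best]
    return chapters
```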

KENNEALLY: So you’ve published the book. What have the reviews been? Have any of the reactions surprised you? And did any of the reviewers – the critics – tell you you got some things wrong?

THOMAS: I mean, in the general media, the worst thing that we read about the book was it’s a boring read. (laughter) But it’s a book about chemistry, so, well, I mean, maybe this is the source of it.

KENNEALLY: Jim Milne can argue with you about that.

THOMAS: (laughter) The chemistry community said that for some people in my community, this is very useful, because especially when you are new to the field or when you are an expert in some close field but not exactly in this area, and you want to get a really good overview, but you don’t want to be biased by a certain school of thought, you get here an account of a lot of things in a very dense – in a dense way.

KENNEALLY: But I believe a librarian came to you and raised some alarm bells – or raised some red flags, I should say – regarding the potential here for abuse. Just tell us about that.

THOMAS: When we published it, it was really our intention to start a discussion on the consequences and to start a discussion on how we control this technology in the end. Since then, we have been discussing it with lots of librarians. And indeed, one librarian came to me and said it’s really very fascinating. It’s very interesting. I support the idea of experimenting with this. But you can abuse this technology and somehow mask a plagiarized text, so you could indeed machine-generate a text and then hand it in as your PhD thesis or something like that with only minor modifications. So there are possibilities. There are always possibilities to somehow misuse a technology.

So I think the question for me here is that even when using the technology in the intended way, we have to be very careful. But there is also this even more – potentially more dangerous aspect that the technology itself can be used for something that we don’t want it to be used for.

KENNEALLY: Right. Marjorie Hlava, that’s a good point to bring you into this discussion. You’ve been involved in many of the sort of adjacent areas of artificial intelligence. And again, rather like law and ethics, we’re using artificial intelligence interchangeably with things like machine learning and robotics and so forth. That’s for another discussion.

But you have, over time, learned how these taxonomies are built and why there is a need for care at the very beginning of all of this. And you believe there is a great amount of education that must be done for people to understand better what lies at the foundation of all the work, such as what Niels Peter Thomas was just describing for us. Tell us about that.

HLAVA: Well, it’s an interesting conundrum that we face, because a book like the lithium-ion batteries is an extension of some of these programs like SciGen that automatically create technical articles. And they can be detected if you reverse-engineer their algorithms, but they’re still out there. So it’s a matter of the gradient – the bias that’s implied in anybody’s body of information. I mean, all publishers publish within a topical area. They don’t normally publish across the waterfront. So everything that they publish really has a bias to start with, and we need to be careful about taking that collection or acquisitions policy as a policy versus a bias of the information that they collect.

It’s true that there are a ton of algorithms that we’re calling artificial intelligence, and they’re – to me, they divide into two major areas. One of them is the statistical vector-based things that depend on a training set, and the others are things that are the absolutes. They’re based on some kind of nomenclature, vocabulary, and so on. And to commingle the two gives you the best result. But it also means that we have to be very careful of the content bias. If we take everything that was ever published, as Carlo says, then we aren’t taking advantage of the most recent knowledge, because we have the entire backfile. And if that’s the case, there’s some early stuff that’s been superseded, is no longer thought true, but because it was the big thing at the time, there’s a lot published on it, and it’s going to bias the algorithms going forward. So yes, there’s a lot of human care that needs to go into it in order for us to make sure that we’re giving that basic orientation of what we know.

I’m trying really hard to not use the word truth, (laughter) because as I mentioned earlier in the day, I think that’s a hard concept for a lot of people. But what is the baseline? What is the pile of stuff that we really know and we’ve established and we’ve verified and is factual versus all the stuff that surrounds it?

KENNEALLY: We’re coming to a point, aren’t we, where we’re going to need filters for the filters, it seems to me, and you had worked on one of a kind like that with PLOS. Tell us about that.

HLAVA: Yeah, we have – well, actually, for more than one publisher. But in the case of the PLOS filters, what we built was many-faceted. We did look for plagiarized and programmatically generated papers, but we also looked for things that we called suspect science, where it’s stuff you really don’t want to publish, and you know you don’t want to publish it. So instead of going to all the expense of feeding it to the editors and the peer reviewers, you can clear it out very early. So it’s things like hate and pornography and religion and things that have been proved not to be the case, like vaccines and autism, for example. It might be a perfectly good paper, but it needs to get an extra look before you send it through the full pipeline. So it saves a lot of money, saves a lot of time to filter at the beginning, before you ever put those things in the full production pipeline.
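A pre-submission filter of the kind Hlava describes might, in its simplest form, look like the sketch below: a crude overlap check for recycled or machine-generated text, plus keyword rules for "suspect science" categories, run before anything reaches editors. The categories, patterns, and thresholds are purely illustrative assumptions, not the actual PLOS filters.

```python
# Hypothetical early-stage screening filter, run before the full editorial pipeline.
import re

# Keyword rules for categories you do not want to send to peer review at all.
SUSPECT_PATTERNS = {
    "suspect_science": re.compile(r"vaccines?\s+cause\s+autism", re.IGNORECASE),
    "hate_or_porn": re.compile(r"\b(hate speech|pornograph\w*)\b", re.IGNORECASE),
}

def overlap_ratio(text, known_sentences):
    """Crude check for recycled text: share of sentences already in a reference corpus."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    return sum(1 for s in sentences if s in known_sentences) / len(sentences)

def screen_submission(text, known_sentences, plagiarism_threshold=0.3):
    """Return a list of flags; an empty list means 'send on to the full pipeline'."""
    flags = []
    if overlap_ratio(text, known_sentences) > plagiarism_threshold:
        flags.append("possible_plagiarism")
    for name, pattern in SUSPECT_PATTERNS.items():
        if pattern.search(text):
            flags.append(name)
    return flags
```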

KENNEALLY: And as we discuss this question of ethics in artificial intelligence and algorithms, where do you think we need to emphasize right now? Is it through education? I’d mentioned earlier that you had told me that that’s where you think perhaps we don’t need standards just yet – we don’t need to go that far. We just merely need to begin to talk about it among the community here, but also in the world at large.

HLAVA: Well, there’s two parts to that answer. One is, yes, I think you can standardize too early, and then you stifle innovation. So I don’t think we’re at that point yet, but yes, I think a great deal of education is needed. I mean, I go to lots of publishing meetings, as lots of people here do, and the concept of artificial intelligence – I say, what do you mean by that? I mean, which of those 23 algorithms that I use in my own software do you consider to be artificial intelligence? An awful lot of them are just a single answer. Watson is a big question-answer dictionary look-up system. It’s not much beyond parsing the language and then looking at a huge dictionary to get the answers. Others are a vector-based statistical algorithm that can give you sometimes really the wrong answer depending on which noun you put first in your request, so it’s a hard thing to go through. And I think I mentioned to you some of the work that has been done in the popular media which really, really concerns me. Maybe that’s what you were getting at, Chris.

KENNEALLY: Well, sure. Well, expand on that. So what have you seen that concerns you? And to your point about what gets labeled AI that may not be, I think there’s a term for it – AI-washing, rather like greenwashing, which is the way to make something that might not be particularly good for the environment look good for the environment. People are actively trying to make things look like they really have an AI power.

HLAVA: Yeah, because if you go back, if you look at the same piece of technology that hasn’t changed one whit, three years ago, it was called something entirely different. But now, it’s branded as AI. And I go, oh, man, could we just take a break with the labels, please? (laughter) But one of the things I mentioned to you earlier –

KENNEALLY: Public concern.

HLAVA: Oh, go ahead.

KENNEALLY: Yeah. Oh, no, no, I was going to say – I wanted to bring you back to your point. You want to take this beyond this room and into the public arena and the concerns there.

HLAVA: I do. I think anybody who reads the press knows about all these – ooh, there was Russian interference. No, there was Ukrainian interference. No, there’s somebody else’s interference in the elections and so on. But it goes back quite a ways. So if you look at – like Eli Pariser talked about – and he was president of MoveOn, which was a group that supported heavily the Obama election – and he talked in detail in a TED talk about how you can bias what people think by what you post. Particularly Facebook is very persuasive. And there’s now tribes of people who only listen to the things they want to listen to.

But he described how it works, and the way it works is personalization. So your personalization on Google or on Bing or on Facebook – they’re presenting to you the things that you’ve clicked on or you’ve liked in the past. That means that people are only seeing what they like – what they’re comfortable with. It’s the same as people that go into a store and they’re most comfortable with this store, because its competitor store arranges the information differently, and they’re used to one arrangement and they’re not comfortable with the new arrangement.
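The personalization mechanism Hlava describes – ranking a feed by similarity to what the user has clicked before, so people keep seeing what they already like – reduces to something like the toy sketch below. The tags and scoring are invented for illustration and are not any platform's real algorithm.

```python
# Toy illustration of click-history personalization and the resulting filter bubble.
from collections import Counter

def personalized_feed(candidate_items, click_history):
    """candidate_items and click_history are lists of tuples of topic tags per item."""
    preference = Counter(tag for item in click_history for tag in item)
    # Items sharing more tags with past clicks score higher and are shown first,
    # so the user keeps seeing more of what they already engaged with.
    return sorted(candidate_items,
                  key=lambda item: sum(preference[tag] for tag in item),
                  reverse=True)

# A user who has only ever clicked one viewpoint keeps getting it ranked first.
history = [("politics", "viewpoint_a"), ("politics", "viewpoint_a")]
print(personalized_feed(
    [("politics", "viewpoint_b"), ("politics", "viewpoint_a"), ("sports",)],
    history))
```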

In the case of biasing information that’s presented to you, if you’re only seeing the groupthink of the people that you already like and are comfortable with, you’re never going to hear the other point of view. And I think that’s causing some incredible silos that we aren’t even aware of in our thinking.

KENNEALLY: Well, Access Innovations and your own work go back to some of the early days of NASA and the space missions. At the time, there was a fear of the computer – the supercomputer loomed, and science fiction took it on – 2001: A Space Odyssey being the obvious example. That seemed to subside for a great deal of time – for a generation, really – but it is almost returning now. I wonder how you feel about seeing this fear of computing and fear of artificial intelligence return to public discussion.

HLAVA: Well, I think Hal is still with us. (laughter) Just have to tell you. I was reading on the plane on the way here a Clive Cussler novel about artificial intelligence. And the theme is that it takes over the world, and the bad guys are going to run all our groupthink. And I buy into a lot of how groupthink is affected. I would guess that 95% of the people in this room have a really similar way of thinking, and they probably vote in a really similar way. It’s just it’s the group you’re comfortable with. So when we talk about diversity of opinion, we don’t have that here.

But the way that they disabled, in this novel, the artificial intelligence was they knocked out all digital communication worldwide. Therefore, the computer could not trace what you were thinking and who you were talking to. Took us all back to the Stone Age, which was – well, not the Stone Age, but the ’50s, before –

KENNEALLY: (laughter) There are some who might not –

HLAVA: The same thing.

KENNEALLY: – they might not remember it quite that way. Well, you know, at one point, Niels Peter Thomas, you used the word entity to describe this project. And I want to, before we go to questions from the audience, bring up something that is obviously important to me at Copyright Clearance Center, which is the very interesting discussion about copyright. You alluded to it. Who is the copyright holder for this book? And perhaps moving in the future, we would not necessarily automatically assign it to Springer, because if some entity were to exist, it would hold the copyright. What’s your position on that question? There’s a number of ways of tackling it – whether or not these algorithms or artificial intelligence solutions are thinking, have intention, are expressing themselves. Tell us about that.

THOMAS: I think we’ll see a lot of progress in this question over the next couple of years. But I should say I’m not a legal scholar, so maybe this is more a question for you here.

KENNEALLY: I’m going to turn to Carlo in just –

THOMAS: But I think that one day, in a specific situation, you will not be able to tell if the author was a machine or a human author if you’re not making it transparent. And then I could say, OK, where’s then the difference between a human author and a machine if the reader cannot spot the difference?

I know that currently, in many jurisdictions, it’s organized in a different way. And I think here, we need more standardization over time. Currently, it’s quite complicated for us, because in some countries you have different legal situations than in others. But in principle, it will be a very tough situation.

Finally, also because of the question that I mentioned earlier of responsibility – I mean, it doesn’t help us if we say, OK, it gets some legal rights as a book written by an author. There must be somebody who is accountable for it.

KENNEALLY: And this is a question that comes up with driverless cars. You know, if there’s an accident, who’s responsible?

THOMAS: Right. I should say that – I mean, we will continue to publish a few more titles with this technology, and we’ll continue to improve it, and we’ll continue to develop it. We don’t have the plan to really publish hundreds of books just like that. That would make no sense.

But I see a huge – and there is actually a parallel development to autonomous driving there – I see a huge possibility also to deal with all these ethical issues of, in the end, machine-aided or combined human-machine works. So I think there is a huge possibility to show a sort of machine-generated manuscript to an author who wants to write about a certain subject area. The author can then choose to take some aspects of it – it’s simply easier for him than writing the whole book – but the author will ultimately take the responsibility for it. The author might not have to look for interesting quotes or make up his mind about a possible table of contents. So there is a lot of rather dull work in the life of an author that can be done by the computer. But ultimately, the author will take the responsibility for it. I see a huge business value in it. And I see also part of the problems that we are now discussing being solved – or at least there is a perspective on solving them.

KENNEALLY: So there’ll be a list of coauthors, and one will be 9,000, comma, Hal.

THOMAS: That’s exactly (inaudible).

KENNEALLY: I wonder, Carlo, if you could pick up the legal side of that question. If we are going to move from functioning machines to thinking machines – and I suppose that’s the way to look at the difference – today, where they are simply functioning, at some point in this future that Niels Peter Thomas describes that they may begin to think, what will that mean for copyright?

LAVIZZARI: Yeah, on the question of machines that think – just to give an example with the artificial intelligence entity that beat the Go player recently. If you said to that entity, now lose the next game, it would have no concept of what losing or winning is, and it could not let the human win. So we’re way off from that. But coming back to the interesting question of protectability by IP of AI-generated new information, original information –

KENNEALLY: The ownership, the protectability that flows with that.

LAVIZZARI: The ownership. In English law, computer-generated works are protected, so they might argue that that is already, at least in England, protected. The insurance people and the tort lawyers – they are the ones driving the conversation at the moment. They’re saying any AI entity needs to become a legal entity. Otherwise, the risk is not manageable. But I think the industries that have high R&D have woken up to the tort and insurance lawyers, who are now trying to corner the market and effectively create a whole insurance business model behind AI. And they are saying, no, we’re going to create data trusts.

For instance, IBM and Mastercard have created a data trust in Ireland that generates information in ways compliant with GDPR privacy legislation, and the entity responsible for complying with GDPR and for accruing any of these rights and data is that trust. So I think we may have, for each little book, maybe a little trust, if whatever is written in the book is very risky or involves the generation of new data.

KENNEALLY: Or it could simply be a subject that is critical to life and death – cancer research or something like that.

LAVIZZARI: Yes. And I think in music – if you look at it as a practical transaction lawyer – if you have a composer of a certain age, then you might suggest that a younger arranger could join that composer, because the life of copyright is calculated as 70 years after the death of the last surviving author – typically the younger of the two. So lawyers have been adding additional authors for reasons like that, or suggested that they should have a hand in the original work that is being created. So I would suggest, if you wanted to expand your series of AI books, that you also invite some authors for a modicum of creativity so that you get the protection.

KENNEALLY: Yeah. Well, you know, it seems to me that our response to artificial intelligence is directly related to how we feel about the future. We may consider AI intellectually, yet it seems that we inevitably react emotionally, too. When it comes to AI, we are either pessimists, optimists, or skeptics. In the excitement over artificial intelligence, it’s possible to see ahead a cyber-utopia, a cyber-apocalypse, or maybe you even think of it as all just a lot of cyber BS. So as a final question to each of you, where would you put yourselves in that cyberspace spectrum – optimistic, pessimistic, skeptical? Carlo?

LAVIZZARI: I think I’m sort of a realist, I guess. (laughter) For me –

KENNEALLY: That’s a fudge.

LAVIZZARI: Yeah, so spontaneously, I consider –

HLAVA: He’s the lawyer.

LAVIZZARI: – a glass of water half full and not half empty. But then, for instance, a zero embargo is simply no embargo, right? There’s no half full, half empty there. So I would say we haven’t really dealt on this panel with all the mischief that can be done – sort of adverse examples – how the technology can be distorted and abused. Again, I’m very much an optimist that AI will improve people’s lives enormously, and publishers can really help to make that work. Can technology be abused? Yes.

KENNEALLY: Margie, pessimist, optimist, realist?

HLAVA: So I think I’m neutral. I like – I love – the automation part of AI, what it can do as a human assist, what it can do to raise the level of all of us, raise the level of humanity worldwide. I think it can be used to do away with the clerical aspects of a great deal of what we do, leaving us time for more intellectual activity.

And I agree with Carlo that there’s a great deal of mischief that can be caused. So when I look at the last American election and the data, information, that was fed to people based on their profiles and I look forward to the next election, I cringe, because the amount of mischief that I think will be coming from all sides is just draconian. I don’t think we’ll be able to trust very much of anything. And it scares me.

KENNEALLY: Yeah, that lack of trust is very scary. Niels Peter Thomas, I think you entered your project almost two years ago with some degree of skepticism. Have you moved away from that? You still have questions – or how would you feel about it?

THOMAS: Well, I have tons of questions, and I share your concerns. But I’m an optimist. As long as we have sessions like this, I think – as long as we are transparent about it, as long as we care about these issues – I think we can solve them, so this is why it makes me an optimist.

KENNEALLY: Well, thank you. I really appreciate the discussion we’ve had with Carlo Scollo Lavizzari, Marjorie Hlava, and Niels Peter Thomas. Thank you very much.

(applause)
