Advanced Topics of Law & Technology
Technology has transformative power – and this is generally a power for good. To harness new technologies’ potential while containing their risks, we must consider whether and how to regulate them: through industry-wide codes of conduct, other soft or hard law mechanisms, co-regulation, or perhaps through code itself. We should think hard so as not to overregulate – lest we stifle innovation; but we should think harder still not to underregulate – lest we lose our personal freedoms.
We invite speakers from renowned international universities to discuss with us relevant case law and the institutional legitimacy of the judiciary, administrative agencies, and other supervisory (non-)elected bodies in law and technology.
The talks are open to the public, but you need to register via innovationsrecht(at)uni-graz.at in order to receive the link to the meetings.
The Privatisation of Pre-emption in the Digital Age: A Critical Appraisal of EU Proposals on the Prevention of the Dissemination of Terrorist Content Online
Private actors are increasingly called upon to co-operate with law enforcement authorities in the fight against crime and terrorism. This is particularly the case in the digital age, where access to and regulation of personal data and the digital behaviour of citizens is a priority for the state in developing a pre-emptive paradigm of security governance. The seminar will address this privatisation of pre-emption by critically evaluating EU proposals for a legal framework on the prevention of the dissemination of terrorist content online. By focusing on the changing nature of obligations imposed upon the private sector in this context, the seminar will explore the consequences of the privatisation of pre-emption for fundamental rights and the rule of law.
The Privatization of Punishment version 2.0: Criminal Records, Digital Technologies, and the New Punitive City
The talk will discuss a new facet of the privatization of punishment and its effects. While privatization represents a well-established phenomenon in modern criminal justice operations, less understood are the technological, market, and governmental forces that in recent years have dramatically reshaped the production, dissemination, and use of criminal record data. The focus will be on a reconceptualization of theories of penal entrepreneurialism that more directly addresses the role of technology and corporate interests in the field of criminal record management. A new paradigm (‘penal entrepreneurialism version 2.0’) will be utilized to describe and critically assess the new, multifaceted, and often problematic interactions between private actors autonomously collecting, commodifying, and variously using criminal record data, technological developments, and the criminal justice system.
About Alessandro Corda
Dr. Alessandro Corda is a Lecturer in the School of Law at Queen’s University Belfast (United Kingdom) and a Fellow at the Robina Institute of Criminal Law and Criminal Justice at the University of Minnesota Law School (United States). His research interests focus on criminal law, comparative criminal justice policy, sentencing and corrections, and the sociology of punishment. His research has been published in leading peer-reviewed journals including The British Journal of Criminology, Crime and Justice, The New Criminal Law Review, and Studies in Law, Politics, and Society.
New Laws of Robotics: Defending Human Expertise in the Age of AI
AI is poised to disrupt our work and our lives. We can harness these technologies rather than fall captive to them—but only through wise regulation.
Too many CEOs tell a simple story about the future of work: if a machine can do what you do, your job will be automated. They envision everyone from doctors to soldiers rendered superfluous by ever-more-powerful AI. They offer stark alternatives: make robots or be replaced by them.
Another story is possible. In virtually every walk of life, robotic systems can make labor more valuable, not less. Frank Pasquale tells the story of nurses, teachers, designers, and others who partner with technologists, rather than meekly serving as data sources for their computerized replacements. This cooperation reveals the kind of technological advance that could bring us all better health care, education, and more, while maintaining meaningful work. These partnerships also show how law and regulation can promote prosperity for all, rather than a zero-sum race of humans against machines.
How far should AI be entrusted to assume tasks once performed by humans? What is gained and lost when it does? What is the optimal mix of robotic and human interaction? New Laws of Robotics makes the case that policymakers must not allow corporations or engineers to answer these questions alone. The kind of automation we get—and who it benefits—will depend on myriad small decisions about how to develop AI. Pasquale proposes ways to democratize that decision making, rather than centralize it in unaccountable firms. Sober yet optimistic, New Laws of Robotics offers an inspiring vision of technological progress, in which human capacities and expertise are the irreplaceable center of an inclusive economy.
About Frank Pasquale
Frank Pasquale is a noted expert on the law of artificial intelligence (AI), algorithms, and machine learning. He is a prolific and nationally regarded scholar, whose work focuses on how information is used across a number of areas, including health law, commerce, and tech. His wide-ranging expertise encompasses the study of the rapidity of technological advances and the unintended consequences of the interaction of privacy law, intellectual property, and antitrust laws, as well as the power of private sector intermediaries to influence healthcare and education finance policy.
His book, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015), has been recognized internationally as a landmark study on how “Big Data” affects our lives. The Black Box Society develops a social theory of reputation, search, and finance, while promoting pragmatic reforms to improve the information economy. His other book, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press 2020), and a co-edited volume on AI, The Oxford Handbook of Ethics of AI (Oxford University Press 2020), were both released in 2020.
Pasquale has advised business and government leaders in the health care, internet, and finance industries, including the U.S. Department of Health and Human Services, the U.S. House Judiciary and Energy & Commerce Committees, the Senate Banking Committee, the Federal Trade Commission, and directorates-general of the European Commission. He also has advised officials in Canada and the United Kingdom on law and technology policy. He presently chairs the Subcommittee on Privacy, Confidentiality, and Security, part of the National Committee on Vital and Health Statistics, where he is serving a four-year term.
He is one of the leaders of a global movement for “algorithmic accountability.” In media and communication studies, he has developed a comprehensive legal analysis of barriers to, and opportunities for, regulation of internet platforms. In privacy law and surveillance, his work is among the leading research on regulation of algorithmic ranking, scoring, and sorting systems, including credit scoring and threat scoring.
Frank Pasquale is an Affiliate Fellow at Yale University's Information Society Project and a member of the American Law Institute.
Professor Elisabeth Hödl: AI journalism and the role of the media in shaping public opinion
The Internet has profoundly changed the way people access and engage with news. In the early days of computer culture, the hope was for participation, democratization of access to knowledge, liberation from discrimination, and decentralization of power. Three decades later, the networked society presents a more differentiated picture: content is generated not only by humans, but also by machines. Today, easy-to-use and affordable technologies are available that allow text, image, and audiovisual content to be generated – but also manipulated – in an automated way. AI journalism and automated journalism are shaking up the media industry and, with it, the question of responsibility for content. What is the role of a functioning press in the state, and what are the implications of automated content for law and society?
Professor Lucas MacClure: Online Speech Regulation in Latin America
About Lucas MacClure
In addition to teaching International Studies at Boston College, Professor MacClure is a visiting adjunct professor of Law at Adolfo Ibáñez University. He holds a J.S.D. and an LL.M. from Yale Law School, where he specialized in comparative constitutional law and political science, and an LL.B. from the University of Chile School of Law. He is a member of the Chilean bar association (Colegio de Abogados de Chile). His research and teaching interests include US and Latin American constitutional law, Internet law, and jurisprudence.
Here Come the Global Content Judges: Democratizing Online Speech Rules through Oversight Boards?
Private speech rules suffer from a legitimacy problem that has become increasingly evident. Can platform councils and oversight boards help? Given the impact of platforms on democracy, how can we make platforms’ rules more democratic?
The algorithms and norms that online platforms use to conduct content governance substantially impact how we communicate online. But how are these private normative orders set and legitimated? Can they be contested? Recent European legislative efforts, like the DSA, point to more transparency and accountability duties: platforms have to inform users why content is deleted. This is good, but it does not address the underlying legitimacy problem of setting and ruling on speech rules.
At least one major social network provider (Facebook) has prominently set up an alternative dispute settlement mechanism, an Oversight Board. At the other end of the normative process, TikTok has set up a Content Advisory Council. Does this make sense? Can it increase the legitimacy of the norms and of the decisions taken to enforce them?
About Matthias C. Kettemann
PD Mag. Dr. Matthias C. Kettemann, LL.M. (Harvard), is senior researcher at the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI) and head of its research programme on rule-making in online spaces.
A visiting professor for International Law at the University of Jena, Privatdozent at the University of Frankfurt and lecturer at the University of Graz, he is research group leader for Global Constitutionalism and the Internet at the Alexander von Humboldt Institute for Internet and Society (HIIG); Head of Section, International Law and the Internet, at the Max Planck Institute for Comparative Public Law and International Law; research group leader for Platform and Content Governance at the Sustainable Computing Lab of the Vienna University of Economics and Business; and associated researcher at Germany’s Research Center Social Cohesion.
He has been a consultant for internet law and digital rights for a number of international organizations, including the Council of Europe and the Fundamental Rights Agency, national governments, ministries and legislatures, including the Federal Foreign Office, the Ministry of Economy and the German Bundestag, and international companies and foundations. His most recent books are Kettemann, The Normative Order of the Internet. A Theory of Rule and Regulation Online (OUP 2020), (as editor) Navigating Normative Orders. Interdisciplinary Perspectives (Campus 2020) and (with W. Benedek), Freedom of Expression and the Internet (2nd ed.) (Strasbourg, 2020).
Bottom-up Data Trusts: Disturbing the ‘One Size Fits All’ Approach to Data Governance
From the friends we make to the foods we like, via our shopping and sleeping habits, most aspects of our quotidian lives can now be turned into machine-readable data points. For those able to turn these data points into models predicting what we will do next, this data can be a source of wealth. For those keen to replace biased, fickle human decisions, this data—sometimes misleadingly—offers the promise of automated, increased accuracy. For those intent on modifying our behaviour, this data can help build a puppeteer’s strings. As we move from one way of framing data governance challenges to another, salient answers change accordingly. Just as the wealth-redistribution way of framing those challenges tends to be met with a property-based, ‘it’s our data’ answer, when one frames the problem in terms of manipulation potential, dignity-based, human rights answers rightly prevail (via fairness and transparency-based answers to contestability concerns). Positive data-sharing aspirations tend to be raised within altogether different conversations from those aimed at addressing the above concerns. Professor Sylvie Delacroix’s data trusts proposal challenges these boundaries.
About Sylvie Delacroix
Sylvie Delacroix is Professor in Law and Ethics at the University of Birmingham, having moved there from UCL. Prior to that, Sylvie was the E.G. Davis Fellow at the Radcliffe Institute (Harvard), a lecturer at Kent, and a post-doc at Trinity College, Cambridge. While at UCL, Sylvie was the founding Director of the UCL Centre for Ethics and Law, as well as of the UCL Virtual Environments and the Professions Group. Professor Delacroix’s work has notably been funded by the Wellcome Trust, the NHS, and the Leverhulme Trust, from which she received the Leverhulme Prize.
Professor Delacroix focuses on the intersection between law and ethics, with a particular interest in data and machine ethics. Her current research focuses on the design of computer systems meant for morally loaded contexts. She is also considering the potential inherent in bottom-up Data Trusts as a way of reversing the current top-down, fire-brigade approach to data governance. See https://datatrusts.uk for an overview of data trusts as a mechanism to address power imbalances between data subjects and data controllers.
Professor Delacroix has recently served on the Public Policy Commission on the use of algorithms in the justice system (Law Society of England and Wales). She is also a Fellow of the Alan Turing Institute and a Mozilla Fellow.
Universitätsstraße 15 / C 3, 8010 Graz
Theresa Upperton, LL.B. (Hons), B.A.