<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <atom:link href="https://www.beyond-eve.com/technialarticles/rss" rel="self" type="application/rss+xml" />
        <title><![CDATA[Beyond EVE: Events]]></title>
        <link><![CDATA[https://www.beyond-eve.com/technialarticles/rss]]></link>
        <description><![CDATA[]]></description>
        <language>de-DE</language>
        <pubDate>Sun, 22 Feb 2026 13:12:37 +0100</pubDate>

                    <item>
                <title><![CDATA[The State of AI Regulation Across the Globe]]></title>
                <link>https://www.beyond-eve.com/en/events/the-state-of-ai-regulation-across-the-globe</link>
                <description><![CDATA[<p>Artificial intelligence remains in the regulatory hot seat, with governments racing to develop new and refined regulations and protections for AI. While the EU AI Act is the first mover in the space and the Brussels effect is in play, countries across the globe weigh the key issues of risks, rights, and economic opportunity against the needs of their region. In this learning call, policy experts from Africa, Latin America, North America, and Europe explore different approaches to AI regulation.&nbsp;Panelists discuss how national priorities are implemented in different regions, how the EU AI Act has a global impact, how Rwanda and Brazil serve as new models of AI regulation, and how governments, such as the US, take sectoral, legislative, and regulatory approaches to AI governance.</p><p><br></p><p><strong>Speakers</strong></p><p><a href="https://www.issa.org/speaker/ridwan-oloyede/" rel="noopener noreferrer" target="_blank">Ridwan Oloyede</a>&nbsp;is the assistant director for the professional development workflow at Certa Foundation’s Center for Law and Innovation. He most recently co-authored Certa Foundation’s report on the state of AI regulation in Africa and analyzed Rwanda’s unique approach to AI regulation as the first mover in Africa. Previously, he co-founded Tech Hive Advisor, PrivacyLensAfrica, and Privacy Bar &amp; Bants. He has been designated as an expert at the Council of Europe’s Data Protection Unit.&nbsp;</p><p><a href="https://www.direito.uerj.br/teacher/carlos-affonso-de-souza/" rel="noopener noreferrer" target="_blank">Carlos Affonso de Souza</a> is director of the Institute of Technology and Society of Rio de Janeiro and a professor of law at Rio de Janeiro State University and the University of Ottawa Law School. Carlos Affonso de Souza is an Affiliated Fellow at Yale Law School’s Information and Society Project. 
He is a member of the Executive Committee of the Global Network of Internet &amp; Society Research Centers, and his recent work has focused on Brazil’s efforts to regulate AI ahead of November’s G20 meeting and the emerging rights-based approach in that regulation.</p><p><a href="https://hls.harvard.edu/faculty/mason-kortz/" rel="noopener noreferrer" target="_blank">Mason Kortz</a> is a clinical instructor at Harvard Law School’s Cyberlaw Clinic at the Berkman Klein Center for Internet &amp; Society. He brings his legal training and background as a software and database developer to his work at the Cyberlaw Clinic. His recent research focuses on the law of artificial intelligence and algorithms, and he has written and presented on how algorithmic decision-making has impacted intellectual property rights, product liability requirements, and the criminal legal system.</p><p><a href="https://connection.mit.edu/gabriele-mazzini" rel="noopener noreferrer" target="_blank">Gabriele Mazzini</a> is the architect and lead author of the EU AI Act at the European Commission, where he has focused on technology law and policy for the past seven years. Prior to joining the European Commission, Gabriele served in the European Parliament and the Court of Justice and was Associate General Counsel at the Millennium Villages Project, an international development initiative across several sub-Saharan countries. Gabriele Mazzini is a Connection Science Fellow at the Massachusetts Institute of Technology.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sun, 30 Jun 2024 16:57:19 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Globalizing the European Media Order: The Brussels Effect in Times of Crisis]]></title>
                <link>https://www.beyond-eve.com/en/events/globalizing-the-european-media-order-the-brussels-effect-in-times-of-crisis</link>
                <description><![CDATA[<p>What are the challenges of finding consensus in a diverse Europe, and what is the potential of Europe as a global regulator of the digital field? Renate Nikolay is a key voice in the development of European digital regulation, with deep experience in data protection, hate speech, and disinformation. She will discuss these questions with Prof. Dr. Wolfgang Schulz. Together they will enquire whether Europe’s regulatory approaches are sufficiently anchored in scientific insights into platforms and regulation, and whether the “Brussels effect” of digital rules underlines the importance of European digital law.</p><p>The event will be held in English and moderated by <a href="https://www.hans-bredow-institut.de/en/staff/matthias-c-kettemann" rel="noopener noreferrer" target="_blank">Prof. Dr. Matthias C. Kettemann</a>.</p><p><br></p><p><strong>Renate Nikolay</strong> is Head of Cabinet of Věra Jourová, Vice-President for Values and Transparency, working on matters such as the rule of law and disinformation. She was Director in charge of Asia and Latin America in DG Trade and, from 2014 to 2019, she was Head of Cabinet of the Commissioner for Justice, Consumers and Gender Equality, where she played a key role in the adoption of the data protection reform, the establishment of the European Public Prosecutor, and the Code of Conduct with platforms on online hate speech. She holds a law degree (erstes und zweites Staatsexamen) from the Free University of Berlin.</p><p><strong>Wolfgang Schulz </strong>is Director of the Leibniz Institute for Media Research │ Hans Bredow Institute (HBI) and holds the university professorship “Media Law and Public Law including its Theoretical Foundations” at the Faculty of Law of the University of Hamburg. 
Since February 2012, he has been Research Director of the Alexander von Humboldt Institute for Internet and Society (HIIG). His work focuses on freedom of communication, problems of the legal regulation of media content, questions of law in new media, above all in digital television, and the legal bases of journalism, as well as the jurisprudential foundations of freedom of communication and the implications of the changing public sphere for the law.</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 24 Apr 2022 12:04:59 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Kant's Tribunal of Reason. Legal Metaphor and Normativity in the Critique of Pure Reason]]></title>
                <link>https://www.beyond-eve.com/en/events/kants-tribunal-of-reason-legal-metaphor-and-normativity-in-the-critique-of-pure-reason</link>
                <description><![CDATA[<p>Kant's Critique of Pure Reason, his main work of theoretical philosophy, frequently uses metaphors from the law. In this first book-length study in English of Kant's legal metaphors and their role in the first Critique, Sofie Møller shows that they are central to Kant's account of reason. Through an analysis of the legal metaphors in their entirety, she demonstrates that Kant conceives of reason as having a structure mirroring that of a legal system in a natural right framework. Her study shows that Kant's aim is to make cognizers become similar to authorized judges within such a system, by proving the legitimacy of the laws and the conditions under which valid judgments can be pronounced. These elements consolidate her conclusion that reason's systematicity is legal systematicity.</p><p>With<strong> Rainer Forst</strong> (Normative Orders, Goethe University), <strong>Jakob Huber</strong> (Normative Orders, Goethe University), <strong>Sofie Møller</strong> (Normative Orders, Goethe University),<strong> Susan Shell</strong> (Boston College), <strong>Martin Sticker</strong> (University of Bristol), <strong>Marcus Willaschek</strong> (Normative Orders, Goethe University)</p><p>Moderated by <strong>Lara Scaglia</strong> (University of Warsaw)</p><p>Organised by <strong>Sofie Møller</strong> (Author)</p><p>For further information about the book: <a href="https://www.cambridge.org/core/books/kants-tribunal-of-reason/BF13AA937F273044ECA357F89C30E3C4#fndtn-information" rel="noopener noreferrer" target="_blank">Click here... </a></p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 12 Jan 2021 22:18:14 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI and Content Moderation]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-and-content-moderation</link>
                <description><![CDATA[<p>Public pressure on platform companies to monitor the content on their sites more closely is constantly increasing. To address this, platforms are turning to algorithmic content moderation systems. These systems prioritize content that promises to increase engagement and block content that is deemed illegal or infringes the platform's own policies and guidelines. But content moderation is a ‘wicked problem’ that raises many questions, all of which defy simple answers. Where is the line between hate speech and freedom of expression – and how can that line be automated and enforced on a global scale? Are platforms overblocking legitimate content, or are they rather failing to limit illegal speech on their sites?&nbsp;</p><p>Within the framework of a ten-week virtual research sprint hosted by the HIIG, thirteen international researchers from various disciplines came together to tackle the challenges posed by automation in content moderation. Their work resulted in policy briefings focused on algorithmic audits and on increasing the transparency and accountability of automated content moderation systems. 
We warmly invite you to learn more about their findings and attend their output presentations followed by a panel discussion.</p><h4><strong>Agenda</strong></h4><p>Opening remarks on the project and the research sprint by research director Wolfgang Schulz and research lead Alexander Pirang</p><p>Presentations of the research outputs by the sprint fellows:</p><p><br></p><ul><li><strong>David Morar,</strong> guest researcher at <a href="https://datagovhub.elliott.gwu.edu/staff/" rel="noopener noreferrer" target="_blank">George Washington University</a>, Elliott School of International Affairs, USA</li><li><strong>Aline Iramina,</strong> PhD candidate at the <a href="https://www.gla.ac.uk/" rel="noopener noreferrer" target="_blank">University of Glasgow</a>, Great Britain</li><li><strong>Sunimal Mendis, </strong>lecturer at the <a href="https://research.tilburguniversity.edu/en/persons/sunimal-mendis" rel="noopener noreferrer" target="_blank">University of Tilburg</a>, Netherlands</li></ul><p>Followed by a panel discussion moderated by Jennifer Boone with:</p><ul><li><strong>Angelica Fernandez</strong>, fellow of the research sprint and PhD candidate at the University of Luxembourg</li><li><strong>Philipp Otto</strong>, founder and director of the iRights.lab</li><li><strong>Matthias Kettemann</strong>, associated researcher at the HIIG and scientific lead of the research project “Regulatory Structures and the Emergence of Rules in Online Spaces” at the Leibniz-Institut für Medienforschung │ Hans-Bredow Institut&nbsp;</li></ul>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 22 Feb 2026 13:12:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Freedom to Deviate in the Algorithmic Society?]]></title>
                <link>https://www.beyond-eve.com/en/events/the-freedom-to-deviate-in-the-algorithmic-society</link>
                <description><![CDATA[<p><strong>Lucia Zedner</strong> (Oxford, All Souls College, Professor of Criminal Justice)</p><p><strong>Bernard Harcourt</strong> (Columbia Law School, Professor of Law and of Political Science)</p><p><strong>Frank Pasquale</strong> (Brooklyn Law School, Professor of Law)</p><p><strong>Christoph Burchard</strong> (Goethe University, Professor of Criminal Justice etc.)</p><p><strong>Indra Spiecker gen. Döhmann</strong> (Goethe University, Professor of Public Law etc.)</p><p><strong>Jürgen Kaube</strong> (Co-Editor at Large, Frankfurter Allgemeine Zeitung)</p><p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment; to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions that were previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them? Especially because their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms put into us as citizens? For example, do they have to presume us non-dangerous and harmless? Vice versa, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effect of algorithms and thus create a meta-level of trust? Especially in order to negate, or as a matter of fact: to entertain, the freedom to deviate in the algorithmic society? 
These are but a few of the questions that internationally renowned speakers raise in “Algorithms between Trust and Control”, a lecture series convened by Indra Spiecker gen. Döhmann and Christoph Burchard, and co-organized by the research clusters ConTrust, Normative Orders and ZEVEDI as part of the Frankfurt Talks on Information Law and under the auspices of Goethe University Frankfurt am Main.</p><p>The lectures will take place via Zoom. Please register to receive the login data.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 27 Apr 2021 19:47:44 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[In AI We Trust. Power, Illusion and Control of Predictive Algorithms]]></title>
                <link>https://www.beyond-eve.com/en/events/in-ai-we-trust-power-illusion-and-control-of-predictive-algorithms</link>
                <description><![CDATA[<p>The inaugural Yehuda Elkana Fellow, Helga Nowotny, gave a lecture at the Central European University, in cooperation with the IWM and the Hannah Arendt Center for Politics and Humanities at Bard College.&nbsp;The lecture was preceded by a ceremony to commemorate Yehuda Elkana.</p><p>As we move into a world in which algorithms, robots, and avatars play an ever-increasing role, we need to better understand the nature of AI and its implications for human agency. Helga Nowotny argues that at the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future.</p><p>These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.</p><p><a href="https://www.iwm.at/fellow/helga-nowotny" rel="noopener noreferrer" target="_blank">Helga Nowotny</a> is one of the most prominent scholars in science studies worldwide, an area that counted Yehuda Elkana as one of its pioneers and promoters. For several decades Helga Nowotny has been one of the most influential institution builders in European higher education and research. She has worked with European intergovernmental and non-governmental organizations and bodies, such as the European Science Foundation, governmental agencies in several countries of East and West as well as independent organizations and committees of scholars. 
She has taken part in, or directly led, the design and establishment of innovative new institutions, such as the European Research Council, Collegium Budapest or Central European University.</p><p>The Yehuda Elkana Fellow’s activities are held in partnership with Bard College through the Open Society University Network and supported by a grant from the Open Society Foundations.</p>]]></description>
                <author><![CDATA[The Institute for Human Sciences <iwm@iwm.at>]]></author>
                <pubDate>Sun, 03 Oct 2021 16:31:00 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Chances and limits of artificial intelligence]]></title>
                <link>https://www.beyond-eve.com/en/events/chances-and-limits-of-artificial-intelligence</link>
                <description><![CDATA[<p><em>When the computer decides about our insurance coverage</em></p><p>Artificial intelligence is being increasingly leveraged across industries to offer superior products and services and optimize business processes. The proliferation of AI, however, raises a number of ethical questions on data privacy, fairness, bias, and accountability. In the future, will AI decide who is insured and who is not? Who will get which level of insurance coverage? And will vulnerable groups be left behind uninsured? This talk focuses on the risks related to the use of AI in the insurance sector.</p><p><br></p><p><em>This event is organized by the Department of Strategy and Innovation.</em></p>]]></description>
                <author><![CDATA[Wirtschaftsuniversitaet Wien]]></author>
                <pubDate>Sun, 14 Mar 2021 20:41:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[From Eugenics to Big Data]]></title>
                <link>https://www.beyond-eve.com/en/events/from-eugenics-to-big-data-a-genealogy-of-criminal-risk-assessment-in-american-law-and-policy</link>
                <description><![CDATA[<p><strong>A Genealogy of Criminal Risk Assessment in American Law and Policy</strong></p><p><strong>Jonathan Simon</strong> (UC Berkeley, Professor of Criminal Justice Law)</p><p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment; to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions that were previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them? Especially because their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms put into us as citizens? For example, do they have to presume us non-dangerous and harmless? Vice versa, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effect of algorithms and thus create a meta-level of trust? Especially in order to negate, or as a matter of fact: to entertain, the freedom to deviate in the algorithmic society? These are but a few of the questions that internationally renowned speakers raise in “Algorithms between Trust and Control”, a lecture series convened by Indra Spiecker gen. Döhmann and Christoph Burchard, and co-organized by the research clusters ConTrust, Normative Orders and ZEVEDI as part of the Frankfurt Talks on Information Law and under the auspices of Goethe University Frankfurt am Main.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 27 Apr 2021 19:32:49 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[From Eugenics to Big Data]]></title>
                <link>https://www.beyond-eve.com/en/events/from-eugenics-to-big-data-2</link>
                <description><![CDATA[<p>A Genealogy of Criminal Risk Assessment in American Law and Policy</p><p><strong>Prof. Jonathan Simon</strong> (Professor of Criminal Justice Law, UC Berkeley)</p><p>Convenors: <strong>Prof. Christoph Burchard</strong> (Goethe University, Professor of Criminal Justice, PI of ConTrust and "Normative Orders") and <strong>Prof. Indra Spiecker gen. Döhmann</strong> (Goethe University, Professor of Public Law, PI of ConTrust)</p><p><strong>Presented by:</strong></p><p>Forschungsverbund "Normative Ordnungen" der Goethe-Universität Frankfurt am Main, "ConTrust" - ein Clusterprojekt des Landes Hessen, Frankfurter Gespräche zum Informationsrecht des Lehrstuhls für Öffentliches Recht, Umweltrecht, Informationsrecht und Verwaltungswissenschaften und Zentrum verantwortungsbewusste Digitalisierung</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sun, 03 Oct 2021 15:58:18 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Never apologise, never explain: (How) can AI rebuild trust after conflicts?]]></title>
                <link>https://www.beyond-eve.com/en/events/ever-apologise-never-explain-how-can-ai-rebuild-trust-after-conflicts</link>
                <description><![CDATA[<p><strong>Never apologise, never explain: (How) can AI rebuild trust after conflicts?</strong></p><p><strong>Burkhard Schäfer</strong> (University of Edinburgh, Professor of Computational Legal Theory)</p><p><br></p><p>Opening Remarks by<strong> Prof. Enrico Schleiff </strong>(President of Goethe University)</p><p>Opening Remarks by <strong>Prof. Rainer Forst</strong> (Speaker of ConTrust and Normative Orders)</p><p>Welcoming Remarks &amp; Comment<strong> Prof. Klaus Günther </strong>(Dean of the Faculty of Law Goethe University)</p><p><br></p><p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment; to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions that were previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them? Especially because their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms put into us as citizens? For example, do they have to presume us non-dangerous and harmless? Vice versa, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effect of algorithms and thus create a meta-level of trust? Especially in order to negate, or as a matter of fact: to entertain, the freedom to deviate in the algorithmic society? 
These are but a few of the questions that internationally renowned speakers raise in “Algorithms between Trust and Control”, a lecture series convened by Indra Spiecker gen. Döhmann and Christoph Burchard, and co-organized by the research clusters ConTrust, Normative Orders and ZEVEDI as part of the Frankfurt Talks on Information Law and under the auspices of Goethe University Frankfurt am Main.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 27 Apr 2021 19:33:29 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Never apologise, never explain: (How) can AI rebuild trust after conflicts?]]></title>
                <link>https://www.beyond-eve.com/en/events/never-apologise-never-explain-how-can-ai-rebuild-trust-after-conflicts</link>
                <description><![CDATA[<p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment; to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions that were previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them? Especially because their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms put into us as citizens? For example, do they have to presume us non-dangerous and harmless? Vice versa, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effect of algorithms and thus create a meta-level of trust? Especially in order to negate, or as a matter of fact: to entertain, the freedom to deviate in the algorithmic society?</p><p><strong>Prof. Burkhard Schäfer</strong> (University of Edinburgh, Professor of Computational Legal Theory)</p><p>Opening Remarks by <strong>Prof. Enrico Schleiff</strong> (President of Goethe University)</p><p>Opening Remarks by <strong>Prof. Rainer Forst</strong> (Speaker of ConTrust and Normative Orders)</p><p>Welcoming Remarks &amp; Comment <strong>Prof. Klaus Günther</strong> (Dean of the Faculty of Law Goethe University)</p><p>Convenors: <strong>Prof. Christoph Burchard</strong> (Goethe University, Professor of Criminal Justice, PI of ConTrust and "Normative Orders") and <strong>Prof. Indra Spiecker gen. 
Döhmann</strong> (Goethe University, Professor of Public Law, PI of ConTrust)</p><p><strong>Presented by:</strong></p><p>Forschungsverbund "Normative Ordnungen" der Goethe-Universität Frankfurt am Main, "ConTrust" - ein Clusterprojekt des Landes Hessen, Frankfurter Gespräche zum Informationsrecht des Lehrstuhls für Öffentliches Recht, Umweltrecht, Informationsrecht und Verwaltungswissenschaften und Zentrum verantwortungsbewusste Digitalisierung</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sun, 03 Oct 2021 15:54:17 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Normative Orders]]></title>
                <link>https://www.beyond-eve.com/en/organisations/goethe-universitat-frankfurt-am-main-normativeorders</link>
                <description><![CDATA[<p>Freedom and justice, tolerance and participation: the researchers in the&nbsp;Research Centre "Normative Orders" of Goethe University are reflecting on such rights and principles in social life. How are political, legal, religious or economic orders established, and how do they change? How do structures of power crystallize in such processes of social dynamics? How are power and life chances distributed, on national and transnational levels? The topic is of high social relevance: we need to reflect on a world whose orders are defended with power and yet remain fragile. The research of the Centre focuses on current social conflicts about a fair order of society in times of globalization, as well as its long prehistory. It examines the normative ideas that play a role in such processes and conflicts, as well as how they can be criticized or justified. Above all, it highlights the foundations of politics and law from the perspective of the humanities and social sciences.</p><p>Such questions are complex, and it is for this reason that the&nbsp;Research Centre "Normative Orders" of Goethe University in Frankfurt works on an interdisciplinary basis: from philosophy, history, political science and law to ethnology, economics, sociology and theology.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Thu, 15 Apr 2021 12:40:40 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[A Call for EU Cyber Diplomacy.]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/a-call-for-eu-cyber-diplomacy</link>
                <description><![CDATA[<p>In December 2020, the European Union (EU) presented its new strategy on cybersecurity with the aim of strengthening Europe’s technological and digital sovereignty. The document lists reform projects that will link cybersecurity more closely with the EU’s new rules on data, algorithms, markets, and Internet services. However, it clearly falls short of the development of a European cyber diplomacy that is committed to both “strategic openness” and the protection of the digital single market. In order to achieve this, EU cyber diplomacy should be made more coherent in its supranational, democratic, and economic/technological dimensions. Germany can make an important contribution to that by providing the necessary legal, technical, and financial resources for the European External Action Service (EEAS).</p><p>In the latest issue of <a href="https://www.swp-berlin.org/en/swp-comments-en/" rel="noopener noreferrer" target="_blank"><strong>SWP Comment</strong></a>, <a href="https://leibniz-hbi.de/en/staff/matthias-c-kettemann" rel="noopener noreferrer" target="_blank"><strong>PD Dr. Matthias C. Kettemann</strong></a> and Annegret Bendiek explain why the new EU cybersecurity strategy is too one-sided. The focus should not only be on deterrence and defense, but also on trust and security. They advocate for promoting cyber diplomacy in the European Union.</p><p><strong>Bendiek, A.; Kettemann, M. C. (2021): Revisiting the EU Cybersecurity Strategy: A Call for EU Cyber Diplomacy. In: SWP Comment</strong></p>]]></description>
                <author><![CDATA[The Leibniz Institute for Media Research │ Hans-Bredow-Institut (HBI) <info@hans-bredow-institut.de>]]></author>
                <pubDate>Fri, 11 Jun 2021 22:24:40 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI Regulation in Europe & Fundamental Rights]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-regulation-in-europe-fundamental-rights</link>
                <description><![CDATA[<p>If we are building AI for the future we envision, AI applications must serve humanity and respect fundamental rights. Intergovernmental institutions and supranational entities which have published their AI principles in the last couple of years are now facing the challenge of how to regulate the use and effects of AI applications. The biggest risks and impacts on rights are considered to be in health, education, security, defense, and public services. In a global landscape where Europe is positioning itself for AI governance leadership and setting the standards in AI for the protection of fundamental rights, the panelists will discuss the impact they strive for and the challenges associated with it. </p><p>• How does the work of the Council of Europe (CAHAI - Ad hoc Committee on Artificial Intelligence) and the European Commission (AI HLEG - High-Level Expert Group on Artificial Intelligence) complement other AI policy initiatives under the OECD, G20 &amp; UNESCO? Are all these initiatives aligned with each other in terms of AI regulation and priorities?</p><p>• How has the experience of COVID-19 changed the perspective, approach, and priorities for the regulation of AI? </p><p>• Is global regulation of high-risk AI applications a possibility in the face of the AI race and national strategies? </p><p>• The public sector encapsulates most of the high-risk areas for AI and its impact on fundamental rights. What are the biggest challenges in regulating the use of AI by the public sector? 
</p><p><br></p><p>Moderator: <strong>Merve Hickok </strong>AIethicist.org (US)</p><p><br></p><p>Speakers: </p><p><strong>Peggy Valcke</strong> Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI) (INT) </p><p><strong>Friederike Reinhold</strong> AlgorithmWatch (DE) </p><p><strong>Oreste Pollicino</strong> OECD Global Partnership on Artificial Intelligence (IT) </p><p><strong>Alexandra Geese </strong>MEP (EU) Member of the European Parliament for Germany</p>]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sun, 14 Mar 2021 11:40:13 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Berkman Klein Center for Internet & Society]]></title>
                <link>https://www.beyond-eve.com/en/organisations/harvard-university-berkman-klein-center-for-internet-society</link>
                <description><![CDATA[<p>The Berkman Klein Center's mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions. We are a research center, premised on the observation that what we seek to learn is not already recorded. Our method is to build out into cyberspace, record data as we go, self-study, and share. Our mode is entrepreneurial nonprofit. </p><p><br></p><p><strong>The Center in Brief</strong></p><p>We bring together the sharpest, most thoughtful people from around the globe to tackle the biggest challenges presented by the Internet. As an interdisciplinary, University-wide center with a global scope, we have an unparalleled track record of leveraging exceptional academic rigor to produce real-world impact. We pride ourselves on pushing the edges of scholarly research, building tools and platforms that break new ground, and fostering active networks across diverse communities. United by our commitment to the public interest, our vibrant, collaborative community of independent thinkers represents a wide range of philosophies and disciplines, making us a unique home for open-minded inquiry, debate, and experimentation.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sun, 06 Dec 2020 12:02:59 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[School of Law]]></title>
                <link>https://www.beyond-eve.com/en/organisations/duke-university-school-of-law</link>
                <description><![CDATA[<strong>Duke Law School is an ambitious and innovative institution whose mission is to prepare students for responsible and productive lives in the legal profession by providing a rigorous legal education within a collaborative, supportive, and diverse environment.</strong>

As a community of scholars, the Law School also provides leadership at the national and international levels in enhancing the understanding of law, and in improving the law and legal institutions through public service, research, and scholarship of the highest caliber, reflecting, where appropriate, contributions from scholars in other disciplines within Duke University.

At Duke Law School, students and faculty experience and contribute to academic rigor in an interdisciplinary environment that supports and values creativity and innovation. Strategic investment in faculty, clinics, interdisciplinary centers, law journals and other student development opportunities, and in technology, as well as support for initiatives and opportunities to serve the public interest, ensure that the Law School remains on the cutting edge of legal scholarship, service, and education. These commitments are memorialized in the Duke Blueprint to LEAD, a set of principles for leadership growth that informs the development of committed, ethical lawyers, well-equipped for the 21st century.]]></description>
                <author><![CDATA[School of Law]]></author>
                <pubDate>Fri, 04 Dec 2020 16:06:42 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Kirkland & Ellis International LLP]]></title>
                <link>https://www.beyond-eve.com/en/organisations/kirkland-ellis-international-llp</link>
                <description><![CDATA[Kirkland & Ellis is a 2,200-attorney law firm representing global clients in private equity, M&A and other complex corporate transactions, restructuring, finance and tax. With 14 offices across the United States, Europe and Asia, our reputation for excellence is recognized worldwide. We believe in teamwork fueled by creative solutions, and we are dedicated to providing our clients with superior results.]]></description>
                <author><![CDATA[Kirkland & Ellis International LLP <karriere@kirkland.com>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:13 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[breaking.through - Discovering Successful Women Lawyers as Role Models]]></title>
                <link>https://www.beyond-eve.com/en/organisations/breakingthrough-erfolgreiche-juristinnen-als-vorbilder-fur-sich-entdecken</link>
                <description><![CDATA[Discover and get to know successful women lawyers: breaking.through makes role models visible in engaging portrait interviews and gives you the chance to meet them at exciting events!]]></description>
                <author><![CDATA[breaking.through - Erfolgreiche Juristinnen als Vorbilder für sich entdecken <nadja.harraschain@breakingthrough.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:07 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Stiftung Neue Verantwortung e. V.]]></title>
                <link>https://www.beyond-eve.com/en/organisations/stiftung-neue-verantwortung-e-v</link>
                <description><![CDATA[<strong>The Stiftung Neue Verantwortung</strong> (SNV) is an independent think tank that develops concrete ideas as to how German politics can shape technological change in society, the economy and the state. In order to guarantee the independence of its work, the organisation has adopted a mixed funding model that includes foundations, public funds and businesses.

Issues of digital infrastructure, the changing pattern of employment, IT security or internet surveillance now affect key areas of economic and social policy, domestic security or the protection of the fundamental rights of individuals. The experts of the SNV formulate analyses, develop policy proposals and organise conferences that address these issues and further subject areas.

Many excellent research institutes and think tanks already contribute to the fields of foreign policy, economic policy or environmental policy in Germany. Issues related to new technologies, however, lack comparable expert organisations that focus on current politics and social debates. The SNV wants to fill this gap in the landscape of German institutes and think tanks. It seeks to provide a focal point for everyone whose work touches on current political and social questions raised by the cross-cutting issue of digitalization.]]></description>
                <author><![CDATA[Stiftung Neue Verantwortung e. V. <info@stiftung-nv.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Sustainability Transformation Conference 2020]]></title>
                <link>https://www.beyond-eve.com/en/events/sustainability-transformation-conference-2020</link>
                <description><![CDATA[<p><strong>The German Environment Agency (UBA) and the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) invite you to attend a conference on “Sustainability Transformation Conference 2020: Socio-ecological transformation on the fast track – Covid-19 as catalyst of change?”.</strong></p><p>The digital conference will focus on the transformation of the economy and society to shape a sustainable future. The event will be hosted by <strong>Svenja Schulze, German Minister of the Environment</strong>, and <strong>Dirk Messner, President of the German Environment Agency</strong>.</p><p><a href="https://www.bmu-events.de/sites/default/files/gc-event/uploads/201119_transformationsconference_final_programme.pdf" rel="noopener noreferrer" target="_blank"><strong>The agenda for the conference can be found here.</strong></a></p><p>In light of the existing and expected restrictions on travel and group gatherings in response to the COVID-19 pandemic, the decision was taken to hold the conference as a virtual event. Attendees can participate in the conference through a live stream on the homepage of the German Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety, and will have the opportunity to interact with speakers and other participants using voting tools and Q&amp;A sessions (no installation needed). Links will be provided in the official invitation. There will be no registration and the conference will be held in English.</p>]]></description>
                <author><![CDATA[Umweltbundesamt <buergerservice@uba.de>]]></author>
                <pubDate>Sat, 23 Oct 2021 12:33:42 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Normative Order of the Internet: A Theory of Rule and Regulation Online]]></title>
                <link>https://www.beyond-eve.com/en/events/the-normative-order-of-the-internet-a-theory-of-rule-and-regulation-online</link>
                <description><![CDATA[<p>There is order on the internet, but how has this order emerged and what challenges will threaten and shape its future? This study shows how a legitimate order of norms has emerged online, through both national and international legal systems. It establishes the emergence of a normative order of the internet, an order which explains and justifies processes of online rule and regulation. This order integrates norms at three different levels (regional, national, international), of two types (privately and publicly authored), and of different character (from ius cogens to technical standards). Matthias C. Kettemann assesses their internal coherence, their consonance with other order norms and their consistency with the order's finality. The normative order of the internet is based on and produces a liquefied system characterized by self-learning normativity. In light of the importance of the socio-communicative online space, this is a book for anyone interested in understanding the contemporary development of the internet. <strong>Dr. Matthias C. Kettemann</strong>, LL.M. (Harvard), Leibniz Institute for Media Research | Hans-Bredow-Institut Hamburg, Forschungsverbund "Normative Ordnungen" of Goethe University. Registration is requested; the event will take place virtually via GoToMeeting. </p><p><br></p><p>The dial-in details will be sent after registration. Oxford University Press 2020. Organizers: Forschungsverbund "Normative Ordnungen" of Goethe University Frankfurt am Main, Leibniz-Institut für Medienforschung | Hans-Bredow-Institut, Sustainable Computing Lab, WU Wien, Humboldt Institute for Internet and Society (HIIG) and Oxford University Press</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:43:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Race, Policing, and Guns]]></title>
                <link>https://www.beyond-eve.com/en/events/race-policing-and-guns</link>
                <description><![CDATA[In the ongoing national conversations about policing, protest, racism, and violence, the role of guns plays an important part. And with gun purchasing, carrying, and brandishing increasingly in the news during the Covid-19 pandemic, the intersection of these issues takes on heightened importance. Join us for an online panel discussion about these issues. Panelists include Duke's own Darrell Miller, Melvin G. Shimm Professor of Law, Associate Dean for Intellectual Life, and Faculty Co-Director of the Center for Firearms Law; Kami Chavis, Associate Provost for Academic Initiatives, Professor of Law, and Director of Criminal Justice Program at Wake Forest University School of Law; Alice Ristroph, Professor of Law at Brooklyn Law School; and Stuart Schrader, Lecturer and Assistant Research Scientist in Sociology at Johns Hopkins University. Sponsored by the Center for Firearms Law. Contact Theresa Boyce for more information.]]></description>
                <author><![CDATA[School of Law]]></author>
                <pubDate>Sat, 05 Dec 2020 22:54:20 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Defund the Police: A Discussion and Q&A]]></title>
                <link>https://www.beyond-eve.com/en/events/defund-the-police-a-discussion-and-qa</link>
                <description><![CDATA[Join a coalition of student groups for a discussion and Q&A on the merits, issues, and trade-offs of defunding-to-reallocate budget initiatives.

Sponsored by Black Law Students Association, Latin American Law Students Association, Asian Pacific American Law Students Association, Womxn of Color Collective, American Civil Liberties Union, National Lawyers Guild, Duke Immigrant and Refugee Project, Human Rights Law Society, and OutLaw. For more information, please contact Luis Basurto Villanueva.]]></description>
                <author><![CDATA[School of Law]]></author>
                <pubDate>Sat, 05 Dec 2020 22:55:07 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Legal Tech – potentials and applications of technology-based legal consulting]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/legal-tech-potentials-and-applications-of-technology-based-legal-consulting</link>
                <description><![CDATA[<p>Since there is currently a high level of dynamism with regard to the development of new business models and the establishment of legal tech companies with a focus on legal advice and legal services, the TAB has published a study on their potential and applications.</p><p>TAB's policy brief in English <a href="https://www.tab-beim-bundestag.de/en/pdf/publications/tab-fokus/TAB-Fokus-024.pdf" rel="noopener noreferrer" target="_blank">TAB-Fokus no.&nbsp;24 PDF&nbsp;[2.58&nbsp;MB]</a> provides an overview of Legal Tech services and applications, assesses the potentials, risks and opportunities involved and explores further potential needs for action.</p>]]></description>
                <author><![CDATA[KIT - Karlsruher Institut für Technologie - Office of Technology Assessment at the German Bundestag <buero@tab-beim-bundestag.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 19:41:13 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Code of Capital: How the Law Creates Wealth and Inequality]]></title>
                <link>https://www.beyond-eve.com/en/events/the-code-of-capital-how-the-law-creates-wealth-and-inequality</link>
                <description><![CDATA[Please join us to hear Katharina Pistor discuss her new book, The Code of Capital: How the Law Creates Wealth and Inequality. The book is a major intervention about the nature of modern capitalism. Pistor argues for the central role of the law in shaping the distribution of wealth and makes a compelling case that it is law that creates capital itself. 

<strong>Katharina Pistor</strong> is the Edwin B. Parker Professor of Comparative Law at Columbia Law School and director of the Law School's Center on Global Legal Transformation. Her work spans comparative law and corporate governance, law and finance, and law and development. She is the co-recipient of the Max Planck Research Award (2012) and a member of the Berlin-Brandenburg Academy of Science. Lunch will be served. Sponsored by the Global Financial Markets Center.]]></description>
                <author><![CDATA[School of Law]]></author>
                <pubDate>Sat, 05 Dec 2020 22:57:57 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Punishment without Crime]]></title>
                <link>https://www.beyond-eve.com/en/events/punishment-without-crime</link>
                <description><![CDATA[<strong>A Book Panel Discussion with Prof. Alexandra Natapoff</strong>

Panel discussion of Alexandra Natapoff's new book, Punishment Without Crime: How Our Massive Misdemeanor System Traps the Innocent and Makes America More Unequal. The book describes the powerful influence that misdemeanors exert over the entire U.S. criminal system. It was selected by Publishers Weekly as a Best Book of 2018. 

<strong>Natapoff</strong> is a professor at UCI Law School and has previously served as an Assistant Federal Public Defender in Baltimore, Maryland. Panelists include Adam Gershowitz, Professor at William & Mary Law School, Eisha Jain, Visiting Professor at Duke Law, and Vikrant Reddy, Senior Research Fellow at the Charles Koch Institute. Professor Brandon Garrett (Duke Law) will moderate. Sponsored by the Duke Center for Science and Justice and the Duke Criminal Law Society.]]></description>
                <author><![CDATA[School of Law]]></author>
                <pubDate>Sat, 05 Dec 2020 22:55:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Crypto-Politics. Encryption and Democratic Practices in the Digital Era]]></title>
                <link>https://www.beyond-eve.com/en/events/crypto-politics-encryption-and-democratic-practices-in-the-digital-era</link>
                <description><![CDATA[<p>The volume centres on the debates on digital encryption in Germany and the USA, during the aftermath of Edward Snowden’s leaks, which revolved around the value of privacy and the legitimacy of surveillance practices. Using a discourse analysis of mass media and specialist debates, it shows how these are closely interlinked with technological controversies and how, as a result, contestation emerges not within one public sphere but within multiple expert circles. The book develops the notion of ‘publicness’ in order to grasp the political significance of these controversies, thereby making an innovative contribution to Critical Security Studies by introducing digital encryption as an important site for understanding the broader debates on cyber security and surveillance.</p><p><br></p><p>With: <strong>Dr. Linda Monsees</strong> (author, postdoctoral researcher at the Cluster of Excellence "Die Herausbildung normativer Ordnungen"), <strong>Prof. Peter Burgess</strong> (Professor and Chair of Geopolitics of Risk at the Ecole Normale Supérieure, Paris) and <strong>Prof. Dr. Nicole Deitelhoff</strong> (Director of the Leibniz Institute Hessische Stiftung Friedens- und Konfliktforschung, Principal Investigator of the Cluster of Excellence "Die Herausbildung normativer Ordnungen", and Professor of International Relations and Theories of Global Orders at Goethe University)</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sat, 05 Dec 2020 22:02:11 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Rethinking Responsibility]]></title>
                <link>https://www.beyond-eve.com/en/events/rethinking-responsibility</link>
                <description><![CDATA[After the publication of Hans Jonas' <strong>Das Prinzip Verantwortung</strong> forty years ago, the principle of responsibility has become a key concept in moral and political debates. Yet the unconditional responsibility for the possibility of the existence of future generations – not only of humans, but also of other living beings – is invariably accompanied by the "heuristics of fear," which presupposes imagining the worst-case scenario and a pronouncedly bleak future. The dystopian principle of responsibility was introduced as a response to Bloch's Das Prinzip Hoffnung, which envisions the possibility of a utopian future for humanity. The proposed project will discuss these two principles and will argue that they are not mutually exclusive, so that, while still preserving the imperative of responsibility, one can maintain a utopian ideal as a regulative idea for moral and political action.

<strong>Dmitri Nikulin</strong> is Professor of Philosophy at the New School for Social Research in New York. From August to October 2019 he is, at the invitation of Professor Rainer Forst and the Cluster of Excellence »Die Herausbildung normativer Ordnungen«, a fellow at the Forschungskolleg Humanwissenschaften of Goethe University.

Registration is requested by 14 October 2019.]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sat, 05 Dec 2020 23:00:02 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[“Robot judges” without training?]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/robot-judges-without-training</link>
                <description><![CDATA[<p><em>Discussing the implementation of automated decision-making systems as a savior of overburdened legal decision makers is en vogue. But if such systems are employed in place of human decision makers, and as the complexity of legal decisions rises, they face structural problems and barriers that are hardly resolvable. </em><strong><em>Dr. Stephan Dreyer </em></strong><em>and</em><strong><em> Johannes Schmees</em></strong><em> explain this by reference to four technical and legal challenges, seeking to establish a differentiated perspective in the emerging discourse with an eye on technical and legal realities. </em></p><p>doi: <a href="https://doi.org/10.5281/zenodo.3484550" rel="noopener noreferrer" target="_blank">10.5281/zenodo.3484550</a></p><p><strong><em>Dr. Stephan Dreyer</em></strong><em> is Senior Researcher and </em><strong><em>Johannes Schmees</em></strong><em> is Junior Researcher at the Leibniz-Institute for Media Research | Hans-Bredow-Institut. This entry is based on a forthcoming, more extensive article which came into being in the context of the interdisciplinary research project “Deciding about, by and together with ADM-Systems.”</em></p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 14:23:22 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Rethinking Democratic Athens and Republican Rome in an Age of Plutocracy and Populism]]></title>
                <link>https://www.beyond-eve.com/en/events/rethinking-democratic-athens-and-republican-rome-in-an-age-of-plutocracy-and-populism</link>
                <description><![CDATA[<p>Two ancient polities, Athenian democracy and the Roman republic, figure prominently in debates over the contemporary crisis of “liberal,” “electoral” or “representative” democracy. Democratic Athens and republican Rome are often invoked as models to be imitated or avoided in efforts to address rising political inequality and rampant political corruption in our plutocratic age. I criticize recent books by Philip Pettit, Nadia Urbinati and Josiah Ober that evaluate majoritarian and populist solutions, inspired by Athenian or Roman politics, to address the contemporary crisis of democracy. In response, I advocate class-specific or randomly distributed political offices, citizen referenda, and popularly judged political trials as ancient-inspired reforms intended to address the problems of unaccountable and unresponsive elites, socio-economic inequality and political corruption that plague contemporary democracies.</p><p><br></p><p><em>CV</em></p><p><strong>John P. McCormick</strong> is Professor of Political Science at the University of Chicago. He is the author of <em>Carl Schmitt’s Critique of Liberalism: Against Politics as Technology</em> (Cambridge University Press, 1997); <em>Weber, Habermas and Transformations of the European State: On Constitutional, Social and Supranational Democracy</em> (Cambridge University Press, 2007); <em>Machiavellian Democracy</em> (Cambridge University Press, 2011); and <em>Reading Machiavelli</em> (Princeton 2018). 
Professor McCormick has received the following fellowships: Fulbright Scholarship, the Center for European Law &amp; Politics, the University of Bremen in Germany (1994 – 95); Jean Monnet Fellowship, the European University Institute in Florence, Italy (1995 – 96); Radcliffe Institute for Advanced Study Fellowship, Harvard University (2008 – 09); Rockefeller Foundation Resident Fellowship, Bellagio, Italy (2013); and National Endowment for the Humanities Grant (2017 – 18).</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sun, 06 Dec 2020 13:35:33 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Cyberlaw and Human Rights]]></title>
                <link>https://www.beyond-eve.com/en/events/cyberlaw-and-human-rights</link>
                <description><![CDATA[<p>After two decades of little direct legislation of the internet, national laws and related court decisions meant to govern cyberspace are rapidly proliferating worldwide. They are becoming building blocks in new legal frameworks that will shape the evolution of Internet governance and policymaking for years to come.</p><p>In the Global South and particularly under repressive regimes, these frameworks can be imposed with little regard for human rights obligations and without a full understanding of the technologies and processes they regulate or their implications for the preservation of the core values of the internet: interoperability, universality, and free expression and the free flow of information.</p><p>This panel brings together practitioners from five international organizations monitoring the development of legislation and case law related to cyberspace to discuss the implications for the future of human rights online.</p><p><br></p><h2>Panelists</h2><p><em>Moderator</em></p><p><strong>Robert Faris</strong> is the research director at the Berkman Klein Center where he contributes and provides oversight to research at the center. His research includes the study of digital communication mechanisms by civil society organizations and social movements, and the emergence and impact of digitally-mediated collective action, as well as the influence of networked digital technologies on democracy and governance and the evolving role of new media in political change.</p><p><br></p><p><strong>Dr. Hawley Johnson</strong> is the Project Manager for Columbia Global Freedom of Expression, an initiative to advance the understanding of international and national norms and institutions that best protect the free flow of information and expression in an interconnected global community. Hawley has over twelve years of experience in international media development both academically and professionally, with a focus on Eastern Europe. 
She recently worked with the award-winning Organized Crime and Corruption Reporting Project to launch the Investigative Dashboard (ID), a joint effort with Google Ideas offering specialized databases and research tools for journalists in emerging democracies.</p><p><br></p><p><strong>Robert Muthuri</strong> is currently a Research Fellow – ICT at the Centre for IP and IT (CIPIT) at the Strathmore School of Law. He is a Legal Knowledge Engineer working at the intersection of legal theory and AI. Robert is an Advocate of the High Court of Kenya who, with the conviction that technology had a lot more to offer the legal domain, further pursued a career in legal informatics. </p><p><br></p><p><strong>Juan Carlos Lara</strong> is a Chilean lawyer, specializing in law and technology, currently working as the manager of the Public Policy and Research team at Derechos Digitales, a non-governmental organisation based in Santiago de Chile that promotes and defends digital rights in Latin America. He has worked as a consultant in intellectual property for public and private entities, has been a research assistant at the Centre of Studies in Cyber Law at the University of Chile, and is currently an LL.M. candidate at UC Berkeley. In Derechos Digitales, he leads research and policy analysis on technology and data privacy, equality, freedom of expression, and access to knowledge and human rights in online platforms.</p><p><br></p><p><strong>Gayatri Khandhadai</strong> is a lawyer with a background in international law and human rights, international and regional human rights mechanisms, research, and advocacy. She previously worked with national and regional human rights groups, focusing on freedom of expression. She coordinated the IMPACT — India, Malaysia, Pakistan Advocacy for Change through Technology — project with the Association for Progressive Communications. 
Her current focus is on digital rights in Asia with specific emphasis on freedoms of expression, assembly, and association on the Internet.</p><p><br></p><p><strong>Jessica Dheere</strong> is co-founder of the Beirut–based digital rights research, training, and advocacy organization SMEX (smex.org) and a 2018-19 research fellow at the Berkman Klein Center for Internet and Society at Harvard University. She is also incubating director of the recently launched CYRILLA Collaborative (cyrilla.org), a global initiative that maps and analyzes the emergence and evolution of legal frameworks in digitally networked environments through open research, data models, and databases.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:07:58 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[GDPR and Cambridge Analytica: What is the Future of Transatlantic Privacy?]]></title>
                <link>https://www.beyond-eve.com/en/events/gdpr-and-cambridge-analytica-what-is-the-future-of-transatlantic-privacy</link>
                <description><![CDATA[Data Privacy Day is observed every year on January 28th. It commemorates the Jan. 28, 1981, signing of Convention 108, the first legally binding international treaty on privacy and data protection. Leonardo Cervera Navas, Prof. David Hoffman and Prof. Jolynn Dellinger started Data Privacy Day eleven years ago with an event at Duke Law School to discuss transatlantic cooperation in privacy and data protection. 

Prof. Hoffman moderates a panel discussion with <strong>Prof. Jolynn Dellinger</strong>, <strong>Leonardo Cervera Navas</strong> (Director of the European Data Protection Supervisor), and <strong>Dan Caprio</strong> (Former Chief Privacy Officer at the U.S. Department of Commerce and Chief of Staff to U.S. Federal Trade Commissioner Orson Swindle) to discuss the current state of privacy, including the recently enacted E.U. General Data Protection Regulation and efforts to pass a comprehensive U.S. privacy law. 

Sponsored by Prof. Hoffman, Intel Corporation, the Triangle Privacy Research Hub, the Center on Law, Ethics and National Security, and the Duke Center on Law & Technology.]]></description>
                <author><![CDATA[School of Law]]></author>
                <pubDate>Sat, 05 Dec 2020 21:39:02 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Women in Law 2018]]></title>
                <link>https://www.beyond-eve.com/en/events/women-in-law-2018</link>
                <description><![CDATA[We invite you to learn more about Kirkland & Ellis, one of the leading global law firms. Meet our team of dynamic women attorneys, who will introduce you to the Kirkland spirit and our strong commitment to the advancement of women in law.

Sue Flood, an award-winning wildlife photographer, film producer and expedition leader, will inspire and motivate us with her fascinating workshop “My adventures as a wildlife photographer from the North to the South Pole and in-between. How I pursued my dream job in a male-dominated profession”.

The event will conclude with drinks and dinner, socializing and networking.

We look forward to receiving applications from advanced law students, legal trainees and recent graduates. Please send your application including CV (German or English), key word “Women in Law”, to karriere@kirkland.com.

Hotel accommodation from Friday to Saturday will be arranged and travel expenses will be reimbursed. For questions please contact Katharina Netzle at +49 89 2030 6079.

Applications close on 19 October 2018.]]></description>
                <author><![CDATA[Kirkland & Ellis International LLP <karriere@kirkland.com>]]></author>
                <pubDate>Sun, 06 Dec 2020 11:08:48 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Time is Right for Europe to Take the Lead in Global Internet Governance]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/the-time-is-right-for-europe-to-take-the-lead-in-global-internet-governance</link>
                <description><![CDATA[<p>Europe is a key normative power. Its legitimacy as a force for ensuring the rule of law in international relations is unparalleled. It also packs an economic punch. In data protection and the fight against cybercrime, European norms have been successfully globalized. The time is right to take the next step: Europe must now become the international normative leader in developing a new deal on internet governance. To that end, European powers should commit to rules that work for security, economic development and human rights on the internet and implement them in a reinvigorated IGF.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sat, 12 Dec 2020 21:29:11 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Background Discussion: Predictive Policing in Germany]]></title>
                <link>https://www.beyond-eve.com/en/events/hintergrundgesprach-predictive-policing-in-deutschland</link>
                <description><![CDATA[Police authorities in six German federal states currently work with different algorithmic systems that are supposed to predict future crime hotspots. This "predictive policing" is controversial: on the one hand, such systems prompt the authorities to analyse and improve their own police work. On the other hand, predictive policing, especially when police powers are expanded at the same time, leads to a fundamental change in policing that must be critically examined. How can this technology be applied without restricting fundamental rights or undermining the presumption of innocence?

Joachim Eschemann, head of the unit for crime affairs at the Interior Ministry in Düsseldorf, was responsible for developing the predictive-policing system SKALA in North Rhine-Westphalia. In a background discussion on 30 August 2018 at 6:30 p.m., Dr. Tobias Knobloch will talk with him about how crime forecasts are used in everyday police work, which data are used, and about the successes and difficulties of the SKALA project in NRW.]]></description>
                <author><![CDATA[Stiftung Neue Verantwortung e. V. <info@stiftung-nv.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Just & In-Time Climate Policy: Four Initiatives for a Fair Transformation]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/just-in-time-climate-policy-four-initiatives-for-a-fair-transformation</link>
                <description><![CDATA[Limiting global warming to well below 2°C requires the rapid decarbonization of the global economy. If this enterprise fails, we will jeopardize the life-support systems of future generations. The longer the transformation towards climate compatibility is delayed, the more severe the risks and damage will be for a growing number of people. The transformation requirements and the damage caused by climate change have an unequal temporal, geographical and social distribution – as do the respective possibilities for dealing with them.

The WBGU therefore proposes a just & in-time transformation that takes into account all people affected, empowers them, holds those responsible for climate change accountable, and creates both global and national prospects for the future. The WBGU proposes that the German Federal Government should promote four exemplary initiatives of a just & in-time climate policy targeting (1) the people affected by the structural change towards climate compatibility (e.g. in coal-mining regions), (2) the legal rights of people harmed by climate change, (3) the dignified migration of people who lose their native countries due to climate change, and (4) the creation of financing instruments for just & in-time transformation processes.]]></description>
                <author><![CDATA[German Advisory Council on Global Change <wbgu@wbgu.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 21:06:12 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Ethics and algorithmic processes for decision making and decision support]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/ethics-and-algorithmic-processes-for-decision-making-and-decision-support</link>
                <description><![CDATA[<p>Far from being a thing of the future, automated decision-making informed by algorithms (ADM) is already a widespread phenomenon in our contemporary society. It is used in contexts as varied as advanced driver assistance systems, where cars brake automatically in case of danger, and software packages that decide whether or not a person is eligible for a bank loan. Actions of government are also increasingly supported by ADM systems, whether in “predictive policing” or in deciding whether a person should be released from prison. What is more, ADM is still in its infancy: in just a few years’ time, every single person will be affected daily in one way or another by decisions reached using algorithmic processes. Automation is set to play a part in every area of politics and law.</p><p>Current ethical debates about the consequences of automation generally focus on the rights of individuals. However, algorithmic processes – the major component of automated systems – exhibit first and foremost a collective dimension, which can only partially be addressed at the level of individual rights. For this reason, existing ethical and legal criteria are unsuitable, or at least inadequate, when considering algorithms in general. They lead to conceptual blurring with regard to issues such as privacy and discrimination, as when information that could potentially be misused to discriminate illegitimately is declared private. Our aim in the present article is, first, to bring a measure of clarity to the debate so that such blurring can be avoided in the future. In addition, we discuss ethical criteria for technology which, in the form of universal abstract principles, can be applied to all societal contexts.</p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 21:54:14 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[BigBrotherAwards 2018]]></title>
                <link>https://www.beyond-eve.com/en/events/bigbrotherawards-2018</link>
                <description><![CDATA[Since 2000, Digitalcourage e.V. has organized the BigBrotherAwards in Germany, "the Oscars of surveillance" (Le Monde). Among other things, the BigBrotherAwards exposed the Payback card as a data-collection card, the urine tests on trainees at Bayer AG, the dealings surrounding TollCollect's toll system, and Tchibo's brisk trade in customer data. We also revealed that the Metro Group had hidden RFID chips in its customer cards, and demonstrated why Facebook is dangerous.

The BigBrotherAwards are often ahead of their time. The scandal over the surveillance of employees at Lidl only became widely known a year after our award. When Rena Tangens and padeluun demanded in 2013 that "Google must be broken up", it was a radical demand that was only taken up by politicians and journalists in 2014.

Since Edward Snowden's revelations, the BigBrotherAwards have by no means become superfluous. Every year we once again put our finger on the wound and set standards, shaping debate in society and politics.]]></description>
                <author><![CDATA[digitalcourage <mail@digitalcourage.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:33 +0100</pubDate>
                            </item>
            </channel>
</rss>
