<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <atom:link href="https://www.beyond-eve.com/technialarticles/rss" rel="self" type="application/rss+xml" />
        <title><![CDATA[Beyond EVE: Events]]></title>
        <link><![CDATA[https://www.beyond-eve.com/technialarticles/rss]]></link>
        <description><![CDATA[]]></description>
        <language>de-DE</language>
        <pubDate>Sun, 22 Feb 2026 13:12:37 +0100</pubDate>

                    <item>
                <title><![CDATA[The State of AI Regulation Across the Globe]]></title>
                <link>https://www.beyond-eve.com/en/events/the-state-of-ai-regulation-across-the-globe</link>
                <description><![CDATA[<p>Artificial intelligence remains in the regulatory hot-seat, with governments racing to develop new and refined regulations and protections for AI. While the EU AI Act is the first mover in the space and the Brussels effect is in play, countries across the globe weigh the key issues of risks, rights, and economic opportunity against the needs of their region. In this learning call, policy experts from Africa, Latin America, North America, and Europe explore different approaches to AI regulation.&nbsp;Panelists discuss how national priorities are implemented in different regions, how the EU AI Act has a global impact, how Rwanda and Brazil serve as new models of AI regulation, and how governments, such as the US, take sectoral, legislative, and regulatory approaches to AI governance.</p><p><br></p><p><strong>Speakers</strong></p><p><a href="https://www.issa.org/speaker/ridwan-oloyede/" rel="noopener noreferrer" target="_blank">Ridwan Oloyede</a>&nbsp;is the assistant director for the professional development workflow at Certa Foundation’s Center for Law and Innovation. He most recently co-authored Certa Foundation’s report on the state of AI regulation in Africa and analyzed Rwanda’s unique approach to AI regulation as the first mover in Africa. Previously, he co-founded Tech Hive Advisor, PrivacyLensAfrica, and Privacy Bar &amp; Bants. He has been designated as an expert at the Council of Europe’s Data Protection Unit.&nbsp;</p><p><a href="https://www.direito.uerj.br/teacher/carlos-affonso-de-souza/" rel="noopener noreferrer" target="_blank">Carlos Affonso de Souza</a> is director of the Institute of Technology and Society of Rio de Janeiro and a professor of law at Rio de Janeiro State University and the University of Ottawa Law School. He is an Affiliated Fellow at Yale Law School’s Information and Society Project. 
He is a member of the Executive Committee of the Global Network of Internet &amp; Society Research Centers, and his recent work has focused on Brazil’s efforts to regulate AI ahead of November’s G20 meeting and the emerging rights-based approach in that regulation.</p><p><a href="https://hls.harvard.edu/faculty/mason-kortz/" rel="noopener noreferrer" target="_blank">Mason Kortz</a> is a clinical instructor at Harvard Law School’s Cyberlaw Clinic at the Berkman Klein Center for Internet &amp; Society. He brings his legal training and background as a software and database developer to his work at the Cyberlaw Clinic. His recent research focuses on the law of artificial intelligence and algorithms, and he has written and presented on how algorithmic decision-making has impacted intellectual property rights, product liability requirements, and the criminal legal system.</p><p><a href="https://connection.mit.edu/gabriele-mazzini" rel="noopener noreferrer" target="_blank">Gabriele Mazzini</a> is the architect and lead author of the EU AI Act at the European Commission, where he has focused on technology law and policy for the past seven years. Prior to joining the European Commission, Gabriele served in the European Parliament and the Court of Justice and was Associate General Counsel at the Millennium Villages Project, an international development initiative across several sub-Saharan countries. Gabriele Mazzini is a Connection Science Fellow at the Massachusetts Institute of Technology.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sun, 30 Jun 2024 16:57:19 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Rob Kitchin: Navigating Smart Cities]]></title>
                <link>https://www.beyond-eve.com/en/events/rob-kitchin-navigating-smart-cities</link>
                <description><![CDATA[<p>The vision of the smart city promises efficient administration, improved quality of life for its residents, and a major contribution to sustainability. But what are the logics and ideals behind these promises and expectations? What are the perils when urban planning is determined by technology and data?</p><p>In his lecture, Rob Kitchin addresses a number of political and normative questions related to smart cities. He discusses the ethical values and principles that determine the desirable urban environment we want to create and live in. His presentation explores how to ensure equal access to technology and decision-making and how to foster social justice and agency among all citizens. It also examines how these concerns are conceived and operationalised within smart cities around the world. How do these models and visions vary internationally, for example, between Asian and European countries? The final part of the talk will explore the ‘right to the smart city’ and ‘decentering the smart city’. How can these notions be used to create cities that truly prioritise human needs?</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 18 Aug 2024 16:51:50 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[World without cash?]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/world-without-cash</link>
                <description><![CDATA[<p><strong>The TAB report on changes in traditional banking and payment systems provides an overview of developments in payment transactions and the shifting power structure within them.</strong></p><p>In Germany, cash is the only unrestricted legal tender and still the most commonly used means of payment. Compared with non-cash means of payment, cash is an important corrective in payment transactions. No other non-cash means of payment achieves a similarly high level of inclusion and provides comparable protection of privacy. Nevertheless, the use of non-cash means of payment continues to increase in Germany. Card-based payment methods are of particular importance - either directly with the plastic card (debit or credit card) or with the virtual card via which non-cash payment methods are processed in the background, as is common in mobile payment and Internet payment methods.</p><p>BigTechs - large companies with established technology platforms such as Alibaba, Amazon, and Facebook - are now established players in payments. Given the increasing presence and market power of U.S. card providers and BigTechs, as well as the likely growing influence of Chinese BigTechs in payments, the question of how to preserve the European banking industry's ability to act will become more pressing in the future.</p><p>The TAB report provides an overview of developments in payment transactions up to and including February 2021, examining and comparing the specific characteristics of cash and selected non-cash payment solutions as well as payment behavior in Germany, Sweden, and China. The brief study is rounded off by an examination of the changing power structure in payment transactions as a result of the emergence of new players and the reactions of traditional credit institutions and central banks to this.</p><p>The TAB report and the accompanying policy brief TAB-Fokus Nr. 
37 (both currently only in German) are available online. An English translation of the TAB-Fokus will follow soon.</p>]]></description>
                <author><![CDATA[KIT - Karlsruher Institut für Technologie - Office of Technology Assessment at the German Bundestag <buero@tab-beim-bundestag.de>]]></author>
                <pubDate>Sun, 10 Jul 2022 11:58:52 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Krisztina Rozgonyi and Marius Dragomir: Freedom of expression in Central and Eastern Europe]]></title>
                <link>https://www.beyond-eve.com/en/events/krisztina-rozgonyi-and-marius-dragomir-freedom-of-expression-in-central-and-eastern-europe</link>
                <description><![CDATA[<p>Media systems in Central and Eastern Europe have been subject to significant transformation processes over the past two decades. In the face of populist tendencies and with large swathes of the media being captured by governments and oligarchs, the space for independent journalism has dramatically shrunk in most of the region’s nations. For citizens, this means experiencing their everyday realities through multiple layers of distorted communication channels. This situation is further compounded by global digital technologies such as the algorithm-driven manipulation of content. These rapid developments have additional negative effects on national and regional media diversity.</p><p>How can economically vulnerable media regain editorial independence and stand up to powerful propaganda channels? To find answers to this question, this edition of the lecture series features two scholars who specialize in the digital transformation of media systems in democratic societies and the particular challenges in Central and Eastern Europe.</p><p><em>Marius Dragomir</em> talks about the changes experienced by media systems in Central and Eastern Europe and presents the findings of his research into the impact of media capture on independent journalism. He highlights risks that independent journalism is likely to face in the near future. <em>Krisztina Rozgonyi</em> analyses how digital societies in Central and Eastern Europe are embedded in a politically manipulated communicative context and sheds light on its historical roots. This unique situation is further complicated by social media platforms, resulting in an increased vulnerability of the public to hate speech, disinformation, and propaganda.</p><p><br></p><p><strong>Marius Dragomir</strong> is the Director of the Center for Media, Data, and Society. 
In 2015, he founded <a href="https://mpmonitor.org/" rel="noopener noreferrer" target="_blank"><em>MediaPowerMonitor</em></a>, a community of experts in media policy covering trends in regulation, business, and politics that influence journalism. He has spent the past two decades in the media research field, specializing in media and communication regulation, digital media, governing structures of public service media, and media and ownership regulation. Marius is now running several comparative research projects, including the <a href="https://cmds.ceu.edu/media-influence-matrix-whats-it-all-about" rel="noopener noreferrer" target="_blank"><em>Media Influence Matrix</em></a>, a global research project looking into power relations and undue influence in news media.</p><p><strong>Krisztina Rozgonyi</strong> is a senior scientist at the <a href="https://www.oeaw.ac.at/cmc/the-institute/staff/krisztina-rozgonyi" rel="noopener noreferrer" target="_blank">Institute for Comparative Media and Communication Studies (CMC)</a> of the Austrian Academy of Sciences (ÖAW) and a senior expert on international media, telecommunications, and IP law and policy. She works with international and European organizations, national governments, and regulators as an advisor on media freedom, spectrum policy, and digital platform governance. Her recent work for the Venice Commission focused on <a href="https://www.venice.coe.int/webforms/documents/default.aspx?pdffile=CDL-LA(2018)002-e" rel="noopener noreferrer" target="_blank">responding to disinformation online</a>. Krisztina Rozgonyi has also engaged recently with the OSCE Representative on Media Freedom as an expert on <a href="https://www.osce.org/representative-on-freedom-of-media/452452" rel="noopener noreferrer" target="_blank">Artificial Intelligence &amp; media pluralism</a>.</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 24 Apr 2022 11:49:50 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Function determines form]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/function-determines-form</link>
                <description><![CDATA[<h2>New AI algorithm generates innovative substances on the basis of desired properties</h2><p><strong>Whether in medicine, battery research, or materials science, researchers everywhere are seeking innovative substances. In the process, they can often predict the desired chemical and physical properties in great detail, right down to the atomic level. However, the range of all potential chemical compounds alone is so vast that it would take years to find the appropriate substance. An interdisciplinary research group at the Berlin Institute for the Foundations of Learning and Data (BIFOLD) at Technische Universität Berlin has now developed an algorithm which uses AI to implement inverse chemical design and thus generate targeted molecules based on their desired properties. The research group's publication titled "Inverse design of 3d molecular structures with conditional generative neural networks" has now been published in the renowned journal <em>Nature Communications</em>.</strong></p><p>The search for suitable molecules for specific medical or industrial applications is an extremely complex and expensive process. "Hypothetically, there are an incredible number of possible structures. However, only a tiny fraction possesses the specific chemical or physical properties required for a particular application," explains Dr. Kristof Schütt, BIFOLD Junior Fellow at TU Berlin. In recent years, a wealth of methods has been developed that are capable of predicting the chemical properties and energetic states of given substances using AI. But even using these efficient methods, the search for molecules with specific properties has proven difficult in practice, as it is still necessary to search through an overwhelming number of candidates.</p>]]></description>
                <author><![CDATA[Technische Universität Berlin]]></author>
                <pubDate>Sun, 06 Mar 2022 18:42:09 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[On the way to a digitally integrated agriculture?]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/on-the-way-to-a-digitally-integrated-agriculture</link>
                <description><![CDATA[<p>In two newly published reports, TAB sheds light on development trends in digital agricultural technologies and analyses the opportunities and risks of a systemically integrated agriculture. The corresponding Policy Briefs are now available in English.</p><p><em>Agriculture is a highly technical economic sector whose production processes are based on the use of natural resources and the keeping of animals. How the increasing demands for climate protection, sustainability and animal welfare can be reconciled with the task of food security is a highly pressing question that has also repeatedly occupied TAB. Digital innovations, which are supposed to enable highly precise, data-driven agricultural production, have raised hopes of being able to better balance these competing demands. As early as 2005, precision agriculture was the subject of a </em><a href="https://www.tab-beim-bundestag.de/english/projects_moderne-agrartechniken-und-produktionsmethoden-oekonomische-und-oekologische-potenziale.php" rel="noopener noreferrer" target="_blank"><em>TAB study</em></a><em> - the </em><a href="https://www.tab-beim-bundestag.de/english/news-2022-02-16-on-the-way-to-a-digitally-networked-agriculture.php#block3082" rel="noopener noreferrer" target="_blank"><em>TAB reports and Policy Briefs no. 31 and no. 32&nbsp;</em></a><em>, which have just been published, provide an updated overview of the state of digitisation in agriculture and the associated social perspectives and challenges.</em></p><p>The digital applications used in livestock and crop production are extremely diverse, ranging from technical hardware such as GPS control, drones, robotics and sensors to smartphone apps and cloud-based farm management software. It is often said that agriculture is a digital pioneer, which may be true if the technology on offer alone is taken as the yardstick. 
But the extent to which innovative digital technology is actually already being used on farms is still unclear due to a lack of reliable&nbsp;surveys. A significant application hurdle for many farms is the relatively high investment costs, which, in conjunction with economies of scale, mean that the economic use of many digital processes can only be expected for larger farms. In view of the existing structural change in agriculture, an important political task is to ensure equitable access to these technologies. Another controversial issue is who should have access to agricultural data and be able to profit from its commercial use. Many farmers are concerned that the existing monopolization tendencies in the upstream and downstream stages of the value chain (and thus the dependencies of smaller farms) could be further strengthened.</p><p>The central promise of digitization is to be able to control agricultural production processes more efficiently, which in principle can lead to both environmental benefits and operational savings. However, the magnitude of these savings is not easy to determine, as local production conditions have a strong influence on the reduction effects that can be achieved in practice. An important framework condition is also the degree of networking of the individual technologies. The potential of digitization can ultimately only be exploited if agricultural production on farms is "intelligently" networked with upstream and downstream value creation processes (manufacturers of inputs such as seeds and pesticides, food retailers, etc.). However, this is based on prerequisites - such as broadband coverage, provision of open machine interfaces and free availability of geodata - that have not yet been fully realized and make Agriculture 4.0 still appear to be a vision of the future. 
Options for action, such as improving the infrastructural framework conditions, ensuring the participation of smaller family farms, securing the data sovereignty of farmers, and closing knowledge and research gaps, are discussed in Working Report No. 194. The report concludes by stating that, for numerous questions, a forward-looking design depends on answers that extend beyond agriculture and concern, for example, competition policy.</p><p><br></p>]]></description>
                <author><![CDATA[KIT - Karlsruher Institut für Technologie - Office of Technology Assessment at the German Bundestag <buero@tab-beim-bundestag.de>]]></author>
                <pubDate>Sun, 10 Jul 2022 12:11:49 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[How to identify bias in Natural Language Processing]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/how-to-identify-bias-in-natural-language-processing</link>
                <description><![CDATA[<p><strong>Why do translation programs or chatbots on our computers often exhibit discriminatory tendencies towards gender or race? Here is an easy guide to understanding how bias in natural language processing works. We explain why sexist technologies like search engines are not just an unfortunate coincidence.</strong></p><h3><strong>What is bias in translation programs?</strong></h3><p>Have you ever used machine translation to translate a sentence from Estonian? In some languages, like Estonian, pronouns and nouns do not indicate gender. When translating into English, the software has to make a choice: which word becomes male and which female? Often, however, the choice is grounded in stereotypes. Is this just a coincidence?</p><p><br></p><p>Authors:</p><p><a href="https://www.hiig.de/freya-hewett/" rel="noopener noreferrer" target="_blank">Freya Hewett</a> Researcher: AI &amp; Society Lab</p><p><a href="https://www.hiig.de/sami-nenno/" rel="noopener noreferrer" target="_blank">Sami Nenno</a> Student Assistant: AI &amp; Society Lab</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 26 Dec 2021 12:54:23 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI and Content Moderation]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-and-content-moderation</link>
                <description><![CDATA[<p>Public pressure on platform companies to monitor the content on their sites more rigorously is constantly increasing. To address this, platforms are turning to algorithmic content moderation systems. These systems prioritize content that promises to increase engagement and block content that is deemed illegal or that infringes the platform's own policies and guidelines. But content moderation is a ‘wicked problem’ that raises many questions, none of which has a simple answer. Where is the line between hate speech and freedom of expression – and how can that line be automated and enforced at a global scale? Are platforms overblocking legitimate content, or are they rather failing to limit illegal speech on their sites?&nbsp;</p><p>Within the framework of a ten-week virtual research sprint hosted by the HIIG, thirteen international researchers from various disciplines came together to tackle the challenges posed by automation in content moderation. Their work resulted in policy briefings focused on algorithmic audits and on increasing the transparency and accountability of automated content moderation systems. 
We warmly invite you to learn more about their findings and attend their output presentations followed by a panel discussion.</p><h4><strong>Agenda</strong></h4><p>Opening remarks on the project and the research sprint by research director Wolfgang Schulz and research lead Alexander Pirang</p><p>Presentations of the research outputs by the sprint fellows:</p><p><br></p><ul><li><strong>David Morar,</strong> guest researcher at <a href="https://datagovhub.elliott.gwu.edu/staff/" rel="noopener noreferrer" target="_blank">George Washington University</a>, Elliott School of International Affairs, USA</li><li><strong>Aline Iramina,</strong> PhD candidate at the <a href="https://www.gla.ac.uk/" rel="noopener noreferrer" target="_blank">University of Glasgow</a>, Great Britain</li><li><strong>Sunimal Mendis, </strong>lecturer at the <a href="https://research.tilburguniversity.edu/en/persons/sunimal-mendis" rel="noopener noreferrer" target="_blank">University of Tilburg</a>, Netherlands</li></ul><p>Followed by a panel discussion moderated by Jennifer Boone with:</p><ul><li><strong>Angelica Fernandez</strong>, fellow of the research sprint and PhD candidate at the University of Luxembourg</li><li><strong>Philipp Otto</strong>, founder and director of the iRights.lab</li><li><strong>Matthias Kettemann</strong>, associated researcher at the HIIG and scientific lead of the research project ”Regulatory Structures and the Emergence of Rules in Online Spaces” at the Leibniz-Institut für Medienforschung I Hans-Bredow Institut&nbsp;</li></ul>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 22 Feb 2026 13:12:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AlgorithmWatch forced to shut down Instagram monitoring project after threats from Facebook]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/algorithmwatch-forced-to-shut-down-instagram-monitoring-project-after-threats-from-facebook</link>
                <description><![CDATA[<p><strong>Digital platforms play an ever-increasing role in structuring and influencing public debate. Civil society watchdogs, researchers and journalists need to be able to hold them to account. But Facebook is increasingly fighting those who try. It shut down New York University’s Ad Observatory last week, and went after AlgorithmWatch, too. The European Parliament and EU Member States must act now to prevent further bullying.</strong></p><p>On 3 March 2020, AlgorithmWatch launched a project to monitor Instagram’s newsfeed algorithm. Volunteers could install a browser add-on that scraped their Instagram newsfeeds. Data was sent to a database we used to study how Instagram prioritizes pictures and videos in a user’s timeline.</p><p>Over the last 14 months, about 1,500 volunteers installed the add-on. With their data, we were able to show that Instagram likely <a href="https://algorithmwatch.org/en/story/instagram-algorithm-nudity/" rel="noopener noreferrer" target="_blank">encouraged</a> content creators to post pictures that fit specific representations of their body, and that politicians were likely to <a href="https://algorithmwatch.org/en/instagram-algorithm-politicians/" rel="noopener noreferrer" target="_blank">reach a larger audience</a> if they abstained from using text in their publications (Facebook denied both claims). Although we could not conduct a precise audit of Instagram’s algorithm, this research is among the most advanced studies ever conducted on the platform. The project was supported by the European Data Journalism Network and by the Dutch foundation SIDN. 
It was done in partnership with <a href="https://www.mediapart.fr/journal/international/150620/sur-instagram-la-prime-secrete-la-nudite-se-deshabiller-pour-gagner-de-l-audience" rel="noopener noreferrer" target="_blank">Mediapart</a> in France, <a href="https://web.archive.org/web/20210303082809/https:/nos.nl/artikel/2371016-het-algoritme-van-instagram-verslaan-best-lastig-voor-een-politicus.html" rel="noopener noreferrer" target="_blank">NOS</a>, <a href="https://www.groene.nl/artikel/de-poppetjes-zijn-op-instagram-belangrijker-dan-de-inhoud" rel="noopener noreferrer" target="_blank">Groene Amsterdammer</a> and <a href="https://pointer.kro-ncrv.nl/politieke-campagnes-met-veel-selfies-worden-beloond-door-het-instagram-algoritme" rel="noopener noreferrer" target="_blank">Pointer</a> in the Netherlands, and <a href="https://www.sueddeutsche.de/wahlfilter" rel="noopener noreferrer" target="_blank">Süddeutsche Zeitung</a> in Germany, and was covered by dozens of news outlets around the world.</p><p><em>by Nicolas Kayser-Bril</em></p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Tue, 14 Sep 2021 20:20:41 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Freedom to Deviate in the Algorithmic Society?]]></title>
                <link>https://www.beyond-eve.com/en/events/the-freedom-to-deviate-in-the-algorithmic-society</link>
                <description><![CDATA[<p><strong>Lucia Zedner</strong> (Oxford, All Souls College, Professor of Criminal Justice)</p><p><strong>Bernard Harcourt</strong> (Columbia Law School, Professor of Law and of Political Science)</p><p><strong>Frank Pasquale</strong> (Brooklyn Law School, Professor of Law)</p><p><strong>Christoph Burchard</strong> (Goethe University, Professor of Criminal Justice etc.)</p><p><strong>Indra Spiecker gen. Döhmann</strong> (Goethe University, Professor of Public Law etc.)</p><p><strong>Jürgen Kaube</strong> (Co-Editor at Large, Frankfurter Allgemeine Zeitung)</p><p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment; to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them? Especially because their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms put into us as citizens? For example, do they have to presume us non-dangerous and harmless? Vice versa, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effect of algorithms and thus create a meta-level of trust? Especially in order to negate, or as a matter of fact: to entertain, the freedom to deviate in the algorithmic society? 
These are but a few of the questions that internationally renowned speakers raise in “Algorithms between Trust and Control”, a lecture series convened by Indra Spiecker gen. Döhmann and Christoph Burchard, and co-organized by the research clusters ConTrust, Normative Orders and ZEVEDI in the line of the Frankfurt Talks on Information Law and under the auspices of Goethe University Frankfurt am Main.</p><p>The lectures will take place via Zoom. Please register to receive the login data.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 27 Apr 2021 19:47:44 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[In AI We Trust. Power, Illusion and Control of Predictive Algorithms]]></title>
                <link>https://www.beyond-eve.com/en/events/in-ai-we-trust-power-illusion-and-control-of-predictive-algorithms</link>
                <description><![CDATA[<p>The inaugural Yehuda Elkana Fellow, Helga Nowotny, gave a lecture at the Central European University, in cooperation with the IWM and the Hannah Arendt Center for Politics and Humanities at Bard College.&nbsp;The lecture was preceded by a ceremony to commemorate Yehuda Elkana.</p><p>As we move into a world in which algorithms, robots, and avatars play an ever-increasing role, we need to better understand the nature of AI and its implications for human agency. Helga Nowotny argues that at the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future.</p><p>These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.</p><p><a href="https://www.iwm.at/fellow/helga-nowotny" rel="noopener noreferrer" target="_blank">Helga Nowotny</a> is one of the most prominent scholars in science studies worldwide, an area that counted Yehuda Elkana as one of its pioneers and promoters. For several decades Helga Nowotny has been one of the most influential institution builders in European higher education and research. She has worked with European intergovernmental and non-governmental organizations and bodies, such as the European Science Foundation, governmental agencies in several countries of East and West as well as independent organizations and committees of scholars. 
She has taken part in, or directly led, the design and establishment of innovative new institutions such as the European Research Council, Collegium Budapest, and Central European University.</p><p>The Yehuda Elkana Fellow’s activities are held in partnership with Bard College through the Open Society University Network and are supported by a grant from the Open Society Foundations.</p>]]></description>
                <author><![CDATA[The Institute for Human Sciences <iwm@iwm.at>]]></author>
                <pubDate>Sun, 03 Oct 2021 16:31:00 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Chances and limits of artificial intelligence]]></title>
                <link>https://www.beyond-eve.com/en/events/chances-and-limits-of-artificial-intelligence</link>
                <description><![CDATA[<p><em>When the computer decides on our insurance coverage</em></p><p>Artificial intelligence is being increasingly leveraged across industries to offer superior products and services and to optimize business processes. The proliferation of AI, however, raises a number of ethical questions about data privacy, fairness, bias, and accountability. In the future, will AI decide who is insured and who is not? Who will get which level of insurance coverage? And will vulnerable groups be left uninsured? This talk focuses on the risks related to the use of AI in the insurance sector.</p><p><br></p><p><em>This event is organized by the Department of Strategy and Innovation.</em></p>]]></description>
                <author><![CDATA[Wirtschaftsuniversitaet Wien]]></author>
                <pubDate>Sun, 14 Mar 2021 20:41:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[From Eugenics to Big Data]]></title>
                <link>https://www.beyond-eve.com/en/events/from-eugenics-to-big-data-a-genealogy-of-criminal-risk-assessment-in-american-law-and-policy</link>
                <description><![CDATA[<p><strong>A Genealogy of Criminal Risk Assessment in American Law and Policy</strong></p><p><strong>Jonathan Simon</strong> (UC Berkeley, Professor of Criminal Justice Law)</p><p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment, to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions that were previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them, especially given that their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms place in us as citizens? For example, do they have to presume us harmless and non-dangerous? Conversely, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effects of algorithms and thus create a meta-level of trust – especially in order to negate, or indeed to entertain, the freedom to deviate in the algorithmic society? These are but a few of the questions that internationally renowned speakers raise in “Algorithms between Trust and Control”, a lecture series convened by Indra Spiecker gen. Döhmann and Christoph Burchard, and co-organized by the research clusters ConTrust, Normative Orders and ZEVEDI as part of the Frankfurt Talks on Information Law, under the auspices of Goethe University Frankfurt am Main.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 27 Apr 2021 19:32:49 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[From Eugenics to Big Data]]></title>
                <link>https://www.beyond-eve.com/en/events/from-eugenics-to-big-data-2</link>
                <description><![CDATA[<p>A Genealogy of Criminal Risk Assessment in American Law and Policy</p><p><strong>Prof. Jonathan Simon</strong> (Professor of Criminal Justice Law, UC Berkeley)</p><p>Convenors: <strong>Prof. Christoph Burchard</strong> (Goethe University, Professor of Criminal Justice, PI of ConTrust and "Normative Orders") and <strong>Prof. Indra Spiecker gen. Döhmann</strong> (Goethe University, Professor of Public Law, PI of ConTrust)</p><p><strong>Presented by:</strong></p><p>The research alliance "Normative Orders" of Goethe University Frankfurt am Main; "ConTrust", a cluster project of the State of Hesse; the Frankfurt Talks on Information Law of the Chair of Public Law, Environmental Law, Information Law and Administrative Sciences; and the Centre for Responsible Digitalisation (ZEVEDI)</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Sun, 03 Oct 2021 15:58:18 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Never apologise, never explain: (How) can AI rebuild trust after conflicts?]]></title>
                <link>https://www.beyond-eve.com/en/events/ever-apologise-never-explain-how-can-ai-rebuild-trust-after-conflicts</link>
                <description><![CDATA[<p><strong>Never apologise, never explain: (How) can AI rebuild trust after conflicts?</strong></p><p><strong>Burkhard Schafer</strong> (University of Edinburgh, Professor of Computational Legal Theory)</p><p><br></p><p>Opening Remarks by <strong>Prof. Enrico Schleiff</strong> (President of Goethe University)</p><p>Opening Remarks by <strong>Prof. Rainer Forst</strong> (Speaker of ConTrust and Normative Orders)</p><p>Welcoming Remarks &amp; Comment by <strong>Prof. Klaus Günther</strong> (Dean of the Faculty of Law, Goethe University)</p><p><br></p><p>Algorithms – and the actors behind them – are surveying and impacting ever more dimensions of our modern lives. They recommend which movies to watch; they calculate risk-appropriate credit scores; and they play a role in meting out “just” punishment, to name only a few areas. At the same time, they correct imperfect human decisions and add new informational dimensions to decisions that were previously impossible. To assess and evaluate the impending transformations of normative orders in a predictive society, we approach algorithms in light of the juxtaposition of trust and control. Why and under which conditions do – or don’t – we trust algorithms? Indeed, can and should we trust them, especially given that their algorithmic normativity was (not) produced in justificatory fora where trust is brought about in and through social conflicts? But then, how much trust – if any – should algorithms place in us as citizens? For example, do they have to presume us harmless and non-dangerous? Conversely, how much control do we need to retain over algorithms? And how much control should they exert over us? Can we use algorithms to control the effects of algorithms and thus create a meta-level of trust – especially in order to negate, or indeed to entertain, the freedom to deviate in the algorithmic society? These are but a few of the questions that internationally renowned speakers raise in “Algorithms between Trust and Control”, a lecture series convened by Indra Spiecker gen. Döhmann and Christoph Burchard, and co-organized by the research clusters ConTrust, Normative Orders and ZEVEDI as part of the Frankfurt Talks on Information Law, under the auspices of Goethe University Frankfurt am Main.</p>]]></description>
                <author><![CDATA[Normative Orders <office@normativeorders.net>]]></author>
                <pubDate>Tue, 27 Apr 2021 19:33:29 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Computers, Privacy & Data Protection]]></title>
                <link>https://www.beyond-eve.com/en/organisations/computers-privacy-data-protection</link>
                <description><![CDATA[<p>CPDP is a non-profit platform originally founded in 2007 by research groups from the Vrije Universiteit Brussel, the Université de Namur and Tilburg University. The platform was joined in the following years by the Institut National de Recherche en Informatique et en Automatique and the Fraunhofer Institut für System und Innovationsforschung and has now grown into a platform carried by 20 academic centers of excellence from the EU, the US and beyond. As a world-leading multidisciplinary conference, CPDP offers the cutting edge in legal, regulatory, academic and technological developments in privacy and data protection. Within an atmosphere of independence and mutual respect, CPDP gathers academics, lawyers, practitioners, policy-makers, industry and civil society from all over the world in Brussels, offering them an arena to exchange ideas and discuss the latest emerging issues and trends. This unique multidisciplinary formula has served to make CPDP one of the leading data protection and privacy conferences in Europe and around the world.</p>]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sun, 14 Mar 2021 11:54:21 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[A Call for EU Cyber Diplomacy]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/a-call-for-eu-cyber-diplomacy</link>
                <description><![CDATA[<p>In December 2020, the European Union (EU) presented its new strategy on cybersecurity with the aim of strengthening Europe’s technological and digital sovereignty. The document lists reform projects that will link cybersecurity more closely with the EU’s new rules on data, algorithms, markets, and Internet services. However, it clearly falls short of the development of a European cyber diplomacy that is committed to both “strategic openness” and the protection of the digital single market. In order to achieve this, EU cyber diplomacy should be made more coherent in its supranational, democratic, and economic/technological dimensions. Germany can make an important contribution to that by providing the necessary legal, technical, and financial resources for the European External Action Service (EEAS).</p><p>In the latest issue of <a href="https://www.swp-berlin.org/en/swp-comments-en/" rel="noopener noreferrer" target="_blank"><strong>SWP Comment</strong></a>, <a href="https://leibniz-hbi.de/en/staff/matthias-c-kettemann" rel="noopener noreferrer" target="_blank"><strong>PD Dr. Matthias C. Kettemann</strong></a> and Annegret Bendiek explain why the new EU cybersecurity strategy is too one-sided. The focus should not only be on deterrence and defense, but also on trust and security. They advocate for promoting cyber diplomacy in the European Union.</p><p><strong>Bendiek, A.; Kettemann, M. C. (2021): Revisiting the EU Cybersecurity Strategy: A Call for EU Cyber Diplomacy. In: SWP Comment</strong></p>]]></description>
                <author><![CDATA[The Leibniz Institute for Media Research │ Hans-Bredow-Institut (HBI) <info@hans-bredow-institut.de>]]></author>
                <pubDate>Fri, 11 Jun 2021 22:24:40 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Artificial Intelligence and Discrimination Risks in the Health Sector]]></title>
                <link>https://www.beyond-eve.com/en/events/artificial-intelligence-and-discrimination-risks-in-the-health-sector</link>
                <description><![CDATA[<p>Risks of discrimination related to the use of artificial intelligence (AI) and automated decision-making are already well-documented in several domains, including policing, hiring, loans, and benefit fraud detection. In the past year, a number of cases have indicated that the health and medical sector is not immune to the discriminatory effects of AI. Studies have shown that algorithms widely used across hospitals and health systems to guide patient care – on everything from heart surgery and kidney care to cesarean birth and the prioritization of patients following the backlog of appointments caused by the coronavirus – can be racially and culturally biased, and can exacerbate existing health inequalities.</p><p>In this panel we will discuss the risks of bias, AI-driven discrimination, and unfair differentiation in the health sector.</p><ul><li>Is there something specific to discrimination risks in the health sector?</li><li>Are the trade-offs between the benefits and risks of AI different in this sector as opposed to other sectors?</li><li>Is there a health sector-specific notion of fairness? If so, are sector-specific rules needed for AI in health?</li><li>Should legal protection against AI-driven discrimination and unfair differentiation be improved, and who should attend to this: non-discrimination scholars or bioethicists?</li></ul><p>We aim for a lively discussion panel: no presentations and no slides, but a discussion among the panelists and with the audience. The panel will be made up of experts from different disciplines and backgrounds.</p><p>Moderator:</p><p><strong>Frederik Zuiderveen Borgesius</strong> iHub &amp; iCIS Institute for Computing and Information Sciences, Radboud University Nijmegen (NL)</p><p><br></p><p>Speakers:</p><p><strong>Minna Ruckenstein</strong></p><p><strong>Tena Šimonović Einwalter</strong> Equinet (HR)</p><p><strong>Carlos Castillo</strong> Universitat Pompeu Fabra (ES)</p><p><strong>Tamar Sharon</strong> iHub, Radboud University Nijmegen (NL)</p>]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sun, 14 Mar 2021 11:51:47 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI Regulation in Europe & Fundamental Rights]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-regulation-in-europe-fundamental-rights</link>
                <description><![CDATA[<p>If we are building AI for the future we envision, AI applications must serve humanity and respect fundamental rights. Intergovernmental institutions and supranational entities which have published their AI principles in the last couple of years are now facing the challenge of how to regulate the use and effects of AI applications. The biggest risks and impact on rights are considered to be in health, education, security, defense, and public services. In a global landscape where Europe is positioning itself for AI governance leadership and setting the standards in AI for the protection of fundamental rights, the panelists will discuss the impact they strive for and the associated challenges.</p><ul><li>How does the work of the Council of Europe (CAHAI – Ad hoc Committee on Artificial Intelligence) and the European Commission (AI HLEG – High-Level Expert Group on Artificial Intelligence) complement other AI policy initiatives under the OECD, G20 &amp; UNESCO? Are all these initiatives aligned with each other in terms of AI regulation and priorities?</li><li>How has the experience of COVID-19 changed the perspective, approach, and priorities for the regulation of AI?</li><li>Is global regulation of high-risk AI applications a possibility in the face of the AI race and national strategies?</li><li>The public sector encapsulates most of the high-risk areas for AI and its impact on fundamental rights. What are the biggest challenges in regulating the use of AI by the public sector?</li></ul><p><br></p><p>Moderator: <strong>Merve Hickok</strong> AIethicist.org (US)</p><p><br></p><p>Speakers:</p><p><strong>Peggy Valcke</strong> Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI) (INT)</p><p><strong>Friederike Reinhold</strong> AlgorithmWatch (DE)</p><p><strong>Oreste Pollicino</strong> OECD Global Partnership on Artificial Intelligence (IT)</p><p><strong>Alexandra Geese</strong> Member of the European Parliament for Germany (EU)</p>]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sun, 14 Mar 2021 11:40:13 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Enforcing Rights in a Changing World]]></title>
                <link>https://www.beyond-eve.com/en/events/enforcing-rights-in-a-changing-world</link>
                <description><![CDATA[Humanity is going through a historical moment, with a global pandemic reshaping our lives and the world we live in. Public health measures combined with tech solutionism are taking surveillance to the next level, and surveillance has become more and more normalized in our lives. Contact tracing apps and wearables are being introduced while governments are discussing immunity passports and long-term border restrictions. The global economy is in the opening stages of a recession, which means that inequalities will grow further in the aftermath of this pandemic. We are moving more of our lives online. Telemedicine and online classrooms are likely to become staples of everyday life. Amidst all these changes, is there also a change in the way we are looking at things and rethinking the possible? What will be the legacy of this pandemic on human rights, including privacy and data protection? How do we enforce individual and collective rights in a changing world?

- Innovative and interdisciplinary approaches to enforcement and oversight
- Data protection and inequality
- Digital infrastructures as sites of power
- Health and medtech
- Appification of everything
- Balancing rights in extreme situations
- Public health surveillance
- Surveillance of migration and borders
- Data protection and law enforcement
- New perspectives on privacy and data protection
- The role of privacy and other fundamental rights in data protection
- Right to an effective remedy
- Privacy and data protection in the public sector
- Socio-economic rights and data protection
- Visual and artistic approaches to privacy and data protection
- Algorithmic harms and data justice]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sat, 05 Dec 2020 21:45:17 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Information Technology & Innovation Foundation (ITIF)]]></title>
                <link>https://www.beyond-eve.com/en/organisations/information-technology-innovation-foundation-itif</link>
                <description><![CDATA[<p>As technological innovation transforms the global economy and society, policymakers often lack the specialized knowledge and expert perspective necessary to evaluate and respond to fast-moving issues and circumstances. What should they do to capitalize on new opportunities, overcome challenges, and avoid potential pitfalls? The Information Technology and Innovation Foundation (ITIF) exists to provide answers and point the way forward.</p><p>Founded in 2006, ITIF is an independent 501(c)(3) nonprofit, nonpartisan research and educational institute—a think tank. Its mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. ITIF’s goal is to provide policymakers around the world with high-quality information, analysis, and recommendations they can trust. To that end, ITIF adheres to a high standard of research integrity with an internal code of ethics grounded in analytical rigor, policy pragmatism, and independence from external direction or bias. </p><p><strong>Focus</strong></p><p>ITIF focuses on a <a href="http://www.itif.org/issues" rel="noopener noreferrer" target="_blank">host of critical issues</a> at the intersection of technological innovation and public policy—including economic issues related to innovation, competitiveness, trade, and globalization; and technology-related issues in the areas of information technology and data, broadband telecommunications, advanced manufacturing, life sciences, agricultural biotechnology, and clean energy. 
(<a href="https://www.itif.org/policy-goals-and-values" rel="noopener noreferrer" target="_blank">Read more about ITIF’s policy goals and values</a>.)</p><p>Ongoing research programs and educational activities include:</p><ul><li><strong>Setting the policy agenda</strong> on technology, innovation, and global competition issues by producing <a href="http://www.itif.org/publications/reports" rel="noopener noreferrer" target="_blank">original research reports</a> and <a href="http://www.itif.org/publications/blogs-and-op-eds" rel="noopener noreferrer" target="_blank">analytical commentary</a>;</li><li><strong>Shaping public debate</strong> by hosting <a href="http://www.itif.org/events" rel="noopener noreferrer" target="_blank">events</a>, giving <a href="http://www.itif.org/events/presentations" rel="noopener noreferrer" target="_blank">speeches and presentations</a>, providing <a href="http://www.itif.org/publications/testimony-filings" rel="noopener noreferrer" target="_blank">official testimony</a>, publishing <a href="https://www.itif.org/publications/articles-op-eds-blogs" rel="noopener noreferrer" target="_blank">op-eds</a>, and serving as expert issue analysts in the <a href="http://www.itif.org/news-room/news-clips" rel="noopener noreferrer" target="_blank">news media</a>; and</li><li><strong>Advising policymakers</strong> through direct interaction in Washington, D.C., and other state, national, and regional capitals around the world.</li></ul><p>On the strength and influence of this work, the University of Pennsylvania has <a href="https://repository.upenn.edu/think_tanks/17/" rel="noopener noreferrer" target="_blank">ranked</a> ITIF as the world’s leading think tank for science and technology policy, and one of the top 27 U.S. think tanks overall.</p><p><strong>Expertise</strong></p><p>ITIF is led by its president and founder, <a href="http://www.itif.org/person/robert-d-atkinson" rel="noopener noreferrer" target="_blank">Robert D. 
Atkinson</a>, an internationally recognized policy scholar and widely published author whom The New Republic has named one of the “three most important thinkers about innovation,” Washingtonian Magazine has called a “Tech Titan,” and Government Technology Magazine has judged to be one of the 25 top “Doers, Dreamers and Drivers of Information Technology.” Under Atkinson, <a href="http://www.itif.org/people/itif-staff" rel="noopener noreferrer" target="_blank">ITIF’s team of policy analysts and fellows</a> includes authors and recognized experts in the fields of economics, tax policy, trade, telecommunications, privacy, cybersecurity, and life sciences, among many others.</p><p>ITIF is home to the highly regarded <a href="http://www.datainnovation.org/" rel="noopener noreferrer" target="_blank">Center for Data Innovation</a>, which develops and promotes policy ideas to capitalize on the tremendous economic and social benefits that data-driven innovation can offer. ITIF also launched—and spearheads—the <a href="http://gtipa.org/" rel="noopener noreferrer" target="_blank">Global Trade and Innovation Policy Alliance</a>, an international network of think tanks that conduct evidence-based research into policies that can foster greater trade liberalization, curb “innovation mercantilism,” and encourage governments to play proactive roles in spurring innovation and productivity.</p>]]></description>
                <author><![CDATA[Information Technology & Innovation Foundation (ITIF)]]></author>
                <pubDate>Sat, 02 Jan 2021 12:00:15 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Policy Lab Digital, Work & Society]]></title>
                <link>https://www.beyond-eve.com/en/organisations/policy-lab-digital-work-society</link>
                <description><![CDATA[<p>With the Policy Lab Digital, Work &amp; Society, the Federal Ministry of Labour and Social Affairs (BMAS) established a new, interdisciplinary and agile organisational unit that combines the functions and working practices of a traditional think tank with those of a modern future lab. The Policy Lab began its work in October 2018. The aim is to identify, at an early stage, new areas of activity that have emerged for the Ministry due to digitalisation and other trends, to consider the labour market to a greater extent in its social context, and to develop new solutions for the labour market of the future.</p><p>The Policy Lab Digital, Work &amp; Society will combine projects and processes dealing with the digital transformation at the BMAS and use them to create a bigger picture of the labour market of the future. It will thus provide a central supporting function for academia, practitioners and social partners. The Policy Lab follows an approach in which digitalisation is considered in a consistent and systematic manner based on its impact on people and their social and civic relationships. Especially against the background of a constantly changing digital economy, this approach is based on the firm conviction that employment relationships can be oriented to the needs of employees and the requirements of good working practices.</p>]]></description>
                <author><![CDATA[Policy Lab Digital, Work & Society]]></author>
                <pubDate>Thu, 24 Dec 2020 14:34:19 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Technology and Race: The role of big tech]]></title>
                <link>https://www.beyond-eve.com/en/events/technology-and-race-the-role-of-big-tech</link>
                <description><![CDATA[<p>Earlier this month, Google fired one of its most prominent Black researchers over an email she sent criticizing the company’s efforts in both hiring a diverse workforce and removing biases that have been built into its artificial intelligence technology. Her dismissal has sparked outrage both internally at Google and around the world as yet another example of Big Tech’s failures to adequately address diversity, equity&nbsp;and inclusion. The year 2020 has been a time of reckoning for the United States around the impact of systemic racism on the health, safety, mobility&nbsp;and socioeconomic status of Black and brown people.</p><p>While technology has certainly helped jump-start movements like Black Lives Matter, critics say it has not only amplified racial tensions but has also actively reinforced systemic racism through the lack of diversity among its creators and the inherent biases within its algorithms. Jim Steyer, CEO and founder of Common Sense Media and author of the book&nbsp;<em>Which Side of History? How Technology Is Reshaping Democracy and Our Lives</em>, has devoted a section of the book to exploring just how entangled Silicon Valley has become in our national history of racism and inequality.</p><p>In this program, Steyer will be joined by book contributors Ellen Pao, CEO of Project Include, and Theodore Shaw, director of the Center for Civil Rights at the University of North Carolina School of Law. They&nbsp;will discuss technology’s role in exacerbating racial inequality in the United States&nbsp;and the leadership role Big Tech needs to take in order to move the nation forward. We’ll explore how racial inequality is baked into the very fabric of technological platforms, and how diversity, inclusion&nbsp;and equity might hold the solution to changing its course.</p>]]></description>
                <author><![CDATA[Commonwealth Club]]></author>
                <pubDate>Mon, 21 Dec 2020 12:28:13 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Iyad Rahwan: How to trust machines?]]></title>
                <link>https://www.beyond-eve.com/en/events/iyad-rahwan-how-to-trust-machines</link>
                <description><![CDATA[<p>Machine intelligence plays a growing role in our lives. Today, machines recommend things to us, such as news, music, and household products. They trade in our stock markets and optimise our transportation and logistics. They are also beginning to drive us around, play with our children, and diagnose our health. How do we ensure that these machines will be trustworthy? This lecture explores the various psychological, social, cultural, and political factors that shape our trust in machines and argues that the challenges of the information revolution must not be understood solely as a problem of computer science.</p><p>&nbsp;</p><p><strong>Iyad Rahwan</strong> is director of the Max Planck Institute for Human Development in Berlin, where he founded and leads the Center for Humans and Machines. He is also an honorary professor of Electrical Engineering and Computer Science at the Technische Universität Berlin. Until June 2020, he was an Associate Professor of Media Arts &amp; Sciences at the Massachusetts Institute of Technology (MIT). Rahwan holds a PhD in Information Systems (Artificial Intelligence) from the University of Melbourne, Australia. His work lies at the intersection of computer science and human behavior, with a focus on collective intelligence, large-scale cooperation, and the societal impact of artificial intelligence and social media. In addition to various journal articles, Iyad Rahwan is co-author of the study <em>Reply to: Life and death decisions of autonomous vehicles</em> and together with Jean-François Bonnefon he published the paper <em>Machine Thinking, Fast and Slow</em> (both 2020).</p><p>&nbsp;</p><p><strong>The event will be held in English.</strong></p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 11:13:55 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Heinz Nixdorf MuseumsForum]]></title>
                <link>https://www.beyond-eve.com/en/organisations/heinz-nixdorf-museumsforum</link>
                <description><![CDATA[With its exhibitions and events, the Heinz Nixdorf MuseumsForum seeks to keep people informed and help them find their place in our modern information society.

The starting point is a portrayal of the cultural history of information technology in a journey through time covering five thousand years, from the origins of arithmetic and writing to the 21st century.
 
The experiences provided by the exhibition are supplemented by events which show the impact of information technology and take up the challenges of our information age – globalization, networking, and the spread of information and communication technology. HNF focuses on people and their relationship to technology and society, with a view to assisting them in their pursuit of a sense of community, meaning and personal development.

HNF's objectives are to impart knowledge to help people understand developments in the past, to provide a stimulus for structuring the present, and to suggest visions for coping with the future of the information age.

In embracing these objectives HNF is dedicated to Heinz Nixdorf, who died in 1986. This computer pioneer and innovative, public-spirited entrepreneur wanted information technology to be a benefit to mankind. He had the idea of founding a museum to show the story of computing to people, especially the young, and collected over 1,000 historical objects for this purpose.
 
Stiftung Westfalen, a foundation that he established, has made his dream come true - using his collection and adding contemporary aspects in this new combination of a museum and a forum.

OPENING TIMES
Tuesday to Friday 9:00 a.m. to 6:00 p.m.
Saturday, Sunday 10:00 a.m. to 6:00 p.m. 
Closed Monday]]></description>
                <author><![CDATA[Heinz Nixdorf MuseumsForum <service@hnf.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 17:08:30 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG) ]]></title>
                <link>https://www.beyond-eve.com/en/organisations/alexander-von-humboldt-institute-for-internet-and-society-hiig</link>
                <description><![CDATA[<p>The Alexander von Humboldt Institute for Internet and Society (HIIG) was founded in 2011 to research the development of the internet from a societal perspective and better understand the digitalisation of all spheres of life. As the first institute in Germany with a focus on internet and society, HIIG has established an understanding that centres on the deep interconnectedness of digital innovations and societal processes.&nbsp;The development of technology reflects norms, values and networks of interests, and conversely, technologies, once established, influence social values.</p><p><br></p><h3>We explore new models of thought and action</h3><p>Modern societies are based on ever-changing sets of norms, procedures and structures that are intended to enable free and democratic coexistence. In times of fundamental social, economic and technical transformation, however, some of these institutions are reaching the limits of their ability to change and "broken concepts" are emerging. This term refers to ways of thinking, patterns of action or explanatory models that are so deeply connected to their previous context that they now seem to have come from a different era and need to be rethought. We want to research such broken concepts – such as the once-meaningful distinction between the offline and online world – and help overcome them by offering new models of thought and action.&nbsp;</p><p>By doing so, we are actively shaping the society of the future. Based on the scientific competences brought together at the institute and its dedication to interdisciplinarity, HIIG can engage with current topics such as the "platformisation" of the economy and society or the use of artificial intelligence and question the underlying concepts, structures and norms.</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 11 Dec 2020 16:46:20 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Berkman Klein Center for Internet & Society]]></title>
                <link>https://www.beyond-eve.com/en/organisations/harvard-university-berkman-klein-center-for-internet-society</link>
                <description><![CDATA[<p>The Berkman Klein Center's mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions. We are a research center, premised on the observation that what we seek to learn is not already recorded. Our method is to build out into cyberspace, record data as we go, self-study, and share. Our mode is entrepreneurial nonprofit. </p><p><br></p><p><strong>The Center in Brief</strong></p><p>We bring together the sharpest, most thoughtful people from around the globe to tackle the biggest challenges presented by the Internet. As an interdisciplinary, University-wide center with a global scope, we have an unparalleled track record of leveraging exceptional academic rigor to produce real-world impact. We pride ourselves on pushing the edges of scholarly research, building tools and platforms that break new ground, and fostering active networks across diverse communities. United by our commitment to the public interest, our vibrant, collaborative community of independent thinkers represents a wide range of philosophies and disciplines, making us a unique home for open-minded inquiry, debate, and experimentation.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sun, 06 Dec 2020 12:02:59 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[IE University - The Center for the Governance of Change (CGC)]]></title>
                <link>https://www.beyond-eve.com/en/organisations/ie-university-the-center-for-the-governance-of-change-cgc</link>
                <description><![CDATA[<strong>The Center for the Governance of Change (CGC) is an applied-research educational institution based at IE University that studies the political, economic, and societal implications of the current technological revolution and advances solutions to overcome its unwanted effects.</strong>

The CGC produces pioneering impact-oriented research that cuts across disciplines and methodologies to unveil the complexity of emerging technologies such as Artificial Intelligence, Big Data, Blockchain, and Robotics, and explore their potential threats and contributions to society.

Moreover, the CGC also runs a number of executive programs on emerging tech for public institutions and companies interested in expanding their understanding of disruptive trends, and a series of outreach activities aimed at improving the general public’s awareness and agency over the coming changes. All this for one purpose: to help build a more prosperous and sustainable society for all.
                <author><![CDATA[IE University - The Center for the Governance of Change (CGC) <cgc@ie.edu>]]></author>
                <pubDate>Fri, 04 Dec 2020 16:12:01 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Oxford Internet Institute]]></title>
                <link>https://www.beyond-eve.com/en/organisations/university-of-oxford-the-oxford-internet-institute</link>
                <description><![CDATA[<strong>The Oxford Internet Institute</strong> was founded as a full department of the University of Oxford in 2001. The idea for an Oxford research centre focusing on the societal opportunities and challenges posed by rapidly developing Internet technologies was first proposed by Dr Andrew Graham, then Master-Elect of Balliol College, and Derek Wyatt, then MP for Sittingbourne and Sheppey, supported by the then Oxford University Vice-Chancellor, Colin Lucas.

Financial support for the department’s establishment was provided by Dame Stephanie Shirley, founder of the computer software company Xansa, with some match funding provided by the Higher Education Funding Council for England.

Since 2006, the department has offered graduate degrees, marking its transition to a research-led teaching department. Following the success of the DPhil in Information, Communication and the Social Sciences, the department taught its first Masters, the MSc in Social Science of the Internet, in 2009. This programme recently celebrated its tenth anniversary. More recently, MSc and DPhil programmes in Social Data Science have been launched, widening the OII’s intellectual appeal to students.]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Fri, 04 Dec 2020 16:02:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Informatics Europe]]></title>
                <link>https://www.beyond-eve.com/en/organisations/informatics-europe</link>
                <description><![CDATA[<p>Informatics Europe represents the academic and research community in Informatics in Europe and neighbouring countries. It brings together university departments and research laboratories, creating a strong common voice to promote, shape and stimulate quality research, education, and knowledge transfer in Informatics in Europe. Informatics Europe is a non-profit membership association based in Zurich, Switzerland. </p><p><br></p><p><strong>Our mission:</strong> Foster research, education, and knowledge transfer in Informatics </p><p><strong>Our goals:</strong> </p><ul><li>Foster quality of research in Informatics. </li><li>Foster quality of education in Informatics. </li><li>Foster knowledge transfer between academia and industry and society. </li><li>Engage with society on the nature and impact of Informatics. </li><li>Promote quality standards and best practices in research, education, and knowledge transfer. </li><li>Foster relations between academia and government and public institutions. </li><li>Foster co-operation with organisations having complementary goals.</li></ul>]]></description>
                <author><![CDATA[Informatics Europe <administration@informatics-europe.org>]]></author>
                <pubDate>Fri, 04 Dec 2020 15:07:24 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Research for the Networked Society]]></title>
                <link>https://www.beyond-eve.com/en/organisations/wissenschaftszentrum-berlin-fur-sozialforschung-ggmbh-weizenbaum-instituts-fur-die-vernetzte-gesellschaft</link>
                <description><![CDATA[The Weizenbaum Institute investigates the current changes in all aspects of society occurring in response to digitalisation. Our goals are to develop a comprehensive understanding of these changes based on rigorous academic analysis and to offer informed strategies to address them at a political and economic level.

The Institute’s core objective is to conduct outstanding, interdisciplinary, and problem-oriented basic research, which at the same time drives application-oriented projects and, moreover, stimulates the formulation of new research questions. To do justice to the interplay between technology and society, the principle of interdisciplinarity will be implemented not only selectively, but in all research areas and projects. For the first time, the Institute will unite all relevant disciplines in a single research program and develop a holistic perspective on the process of digitalisation in society. A central social challenge is to ensure democratic self-determination and participation under the conditions of increasing digitalisation and automation. Accordingly, the Institute’s overarching question is the following:

<strong>How can the goals of individual and social self-determination be achieved in a world characterised by digitally mediated processes of transformation and demarcation, and which framework conditions and resources are necessary for their realization?</strong>

Here, self-determination is understood as the individual and collective competency to recognise, use, and shape one's scope of action. It is a fundamental prerequisite for the democratic organisation of society and a competitive market economy.]]></description>
                <author><![CDATA[Research for the Networked Society]]></author>
                <pubDate>Fri, 04 Dec 2020 14:44:35 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Hochschule RheinMain - IMPACT RheinMain]]></title>
                <link>https://www.beyond-eve.com/en/organisations/hochschule-rheinmain-impact-rheinmain</link>
                <description><![CDATA[With its proposal IMPACT RheinMain, RheinMain University of Applied Sciences was selected for funding in the first round of the federal-state initiative "Innovative Hochschule". Among the recipients are 35 universities of applied sciences, a college of art and music, and twelve universities and colleges of education. The "Innovative Hochschule" federal-state initiative - a kind of excellence initiative for universities of applied sciences - is intended to support universities in their efforts to further distinguish themselves in the areas of transfer and innovation and to intensify their strategic role in the regional innovation system.

The strategy for transferring scientific findings into practice is essentially based on the three profile-building research priorities of RheinMain University of Applied Sciences: professionalism in social work; smart systems for people and technology; and engineering 4.0, with their interfaces "Smart Energy", "Smart Home" and "Smart Mobility". The university pays particular attention to interdisciplinary cooperation between the individual disciplines. The aim of the IMPACT RheinMain project is to initiate and implement innovative projects in these fields with cooperation partners from industry and civil society.]]></description>
                <author><![CDATA[Hochschule RheinMain - IMPACT RheinMain]]></author>
                <pubDate>Fri, 04 Dec 2020 14:22:19 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Deutsches Elektronen-Synchrotron DESY]]></title>
                <link>https://www.beyond-eve.com/en/organisations/deutsches-elektronen-synchrotron-desy</link>
                <description><![CDATA[<p>DESY is one of the world’s leading accelerator centres. Researchers use the large-scale facilities at DESY to explore the microcosm in all its variety – from the interactions of tiny elementary particles and the behaviour of new types of nanomaterials to biomolecular processes that are essential to life. The accelerators and detectors that DESY develops and builds are unique research tools. The facilities generate the world’s most intense X-ray light, accelerate particles to record energies and open completely new windows onto the universe.</p><p>That makes DESY not only a magnet for more than 3000 guest researchers from over 40 countries every year, but also a sought-after partner for national and international collaborations. Committed young researchers find an exciting interdisciplinary setting at DESY. The research centre offers specialized training for a large number of professions. DESY cooperates with industry and business to promote new technologies that will benefit society and encourage innovations. This also benefits the metropolitan regions of the two DESY locations, Hamburg and Zeuthen near Berlin.</p>]]></description>
                <author><![CDATA[Deutsches Elektronen-Synchrotron DESY]]></author>
                <pubDate>Fri, 04 Dec 2020 12:26:57 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[New York University - AI Now Institute]]></title>
                <link>https://www.beyond-eve.com/en/organisations/new-york-university-ai-now-institute</link>
                <description><![CDATA[Artificial Intelligence systems are being applied to many arenas of human life – across major sectors such as education, health care, criminal justice, housing, and employment – influencing significant decisions that impact individuals, populations, and national agendas.

But the vast majority of AI systems and related technologies are being put in place with minimal oversight, few accountability mechanisms, and little research into their broader implications. Currently there are no agreed-upon methods to measure and assess the social implications of AI, even as these systems are being rapidly integrated into core social institutions.

To ensure that AI systems are sensitive and responsive to the complex social domains in which they are applied, we will need to develop new ways to measure, audit, analyze, and improve them.

The AI Now Institute produces interdisciplinary research on the social implications of artificial intelligence and acts as a hub for the emerging field focused on these issues.

Currently, our research focuses on four key domains: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure.

Founded in 2017 by Kate Crawford and Meredith Whittaker, AI Now is housed at New York University, where it fosters vibrant intellectual engagement and collaboration across the University and beyond.]]></description>
                <author><![CDATA[New York University - AI Now Institute]]></author>
                <pubDate>Thu, 03 Dec 2020 14:29:27 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[When scholars sprint, bad algorithms are on the run]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/when-scholars-sprint-bad-algorithms-are-on-the-run</link>
                <description><![CDATA[<p><em>The first research sprint of the </em><a href="https://www.hiig.de/en/project/the-ethics-of-digitalisation/" rel="noopener noreferrer" target="_blank"><em>Ethics of Digitalisation</em></a><em> project financed by the Stiftung Mercator reached the finishing line. Thirteen international fellows tackled pressing issues concerning the use of AI in content moderation. Looking back at ten intense weeks of interdisciplinary research, we share highlights and key outcomes.</em></p><p>In response to increasing public pressure to tackle hate speech and other challenging content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to be more effective and efficient in identifying potentially illegal or unwanted&nbsp;material. But algorithmic content moderation also raises many questions – all of which defy simple answers. Where is the line between hate speech and freedom of expression – and how can this be automated on a global scale? Should platforms scale the use of AI tools only for illegal online speech, like terrorism promotion, or also for regular content governance? Are platforms’ algorithms over-enforcing against legitimate speech, or are they rather failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms’ algorithmic content moderation processes?</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 03 Jan 2021 16:47:28 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Fraunhofer-Gesellschaft - Institute Computer Graphics Research]]></title>
                <link>https://www.beyond-eve.com/en/organisations/fraunhofer-gesellschaft-institute-computer-graphics-research</link>
                <description><![CDATA[Fraunhofer IGD is the leading international research institution for applied visual computing — image- and model-based information technology that combines computer graphics and computer vision. In simple terms, it is the ability to turn information into images and to extract information from images. All technological solutions by Fraunhofer IGD and its partners are based on visual computing.

In computer graphics, people generate, edit, and process images, graphs, and multi-dimensional models in a computer-aided manner. Examples are applications of virtual and simulated reality.

Computer vision is the discipline that teaches computers how to »see«. In the process, a machine sees its environment by means of a camera and processes information using software. Typical applications can be found in the field of Augmented Reality.]]></description>
                <author><![CDATA[Fraunhofer-Gesellschaft - Institute Computer Graphics Research <info@igd.fraunhofer.de>]]></author>
                <pubDate>Wed, 02 Dec 2020 19:32:20 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AlgorithmWatch gGmbH]]></title>
                <link>https://www.beyond-eve.com/en/organisations/algorithmwatch-ggmbh</link>
                <description><![CDATA[<strong>Mission Statement</strong>
The more technology develops, the more complex it becomes. AlgorithmWatch believes that complexity must not mean incomprehensibility.
AlgorithmWatch is a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision making processes of social relevance, meaning processes that are used either to predict or prescribe human action or to make decisions automatically.

<strong>HOW DO WE WORK?</strong>

<strong>Watch</strong>
AlgorithmWatch analyses the effects of algorithmic decision making processes on human behaviour and points out ethical conflicts.

<strong>Explain</strong>
AlgorithmWatch explains the characteristics and effects of complex algorithmic decision making processes to a general public.

<strong>Network</strong>
AlgorithmWatch is a platform linking experts from different cultures and disciplines focused on the study of algorithmic decision making processes and their social impact.

<strong>Engage</strong>
In order to maximise the benefits of algorithmic decision making processes for society, AlgorithmWatch assists in developing ideas and strategies to achieve intelligibility of these processes – with a mix of technologies, regulation, and suitable oversight institutions.]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Wed, 02 Dec 2020 19:09:01 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Code Girls]]></title>
                <link>https://www.beyond-eve.com/en/organisations/code-girls</link>
                <description><![CDATA[Ever tried to learn a foreign language? And been completely overwhelmed by the flood of vocabulary, grammar and rules? That may be exactly how you feel the first time you sit down with a programming language. Just like a foreign language, HTML, Ruby on Rails or JavaScript are therefore best learned in a group. And best of all with us!

First, it's about learning the basics: with those, you can already form your first sentences. And don't worry, this is not about writing perfect code as quickly as possible. We want you to have fun, ask questions, be curious and get active.

The Code Girls meet every two weeks at the Social Impact Lab. Drop by if you need help setting up your homepage or blog, have a specific question about web programming, want to get involved with the Code Girls, or simply want to know: how do I actually start learning to program?]]></description>
                <author><![CDATA[Code Girls <info@codegirls.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Lise-Meitner-Gesellschaft e.V.]]></title>
                <link>https://www.beyond-eve.com/en/organisations/lise-meitner-gesellschaft-ev</link>
                <description><![CDATA[The Lise-Meitner-Gesellschaft is a registered association whose goal is equal opportunity and the equality of women* in STEM, both inside and outside academic careers.

Women* are still underrepresented in STEM and often experience barely visible and unintended disadvantage and discrimination as a result of unconscious bias.

We are a group founded by young scientists that wants to create broad awareness of these mechanisms, especially in the natural sciences and mathematics, while at the same time encouraging women* to confidently define and pursue their goals.]]></description>
                <author><![CDATA[Lise-Meitner-Gesellschaft e.V. <kontakt@lise-meitner-gesellschaft.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Fachgruppe Frauen in der Gesellschaft für Informatik (GI) - IT-Frauen im Rhein-Neckar-Dreieck]]></title>
                <link>https://www.beyond-eve.com/en/organisations/fachgruppe-frauen-in-der-gesellschaft-fur-informatik-gi-it-frauen-im-rhein-neckar-dreieck</link>
                <description><![CDATA[Following the example of other regions, such as Munich, we want to organise a regular meet-up of women in IT in the Rhine-Neckar triangle. The focus is on professional and career-related exchange: in informal conversations as well as at organised theme evenings with invited speakers or moderated discussions.

This also includes networking, both regionally and nationally, within the framework of the special interest group meetings, which have taken place twice a year for almost 20 years.]]></description>
                <author><![CDATA[Fachgruppe Frauen in der Gesellschaft für Informatik (GI) - IT-Frauen im Rhein-Neckar-Dreieck]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:18 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[YAPILI]]></title>
                <link>https://www.beyond-eve.com/en/organisations/yapili</link>
                <description><![CDATA[YAPILI delivers health@hand – offering new opportunities for many Africans to connect to local and Western health professionals in an efficient and confidential way. In societies where professional health advice is hard & expensive to get, YAPILI offers an affordable, anonymous and secure channel to seek medical care for pregnancy & family planning, diabetes & hypertension, HIV & sexual health, mental health and general health questions. 

YAPILI was started in November 2014 by a group of four young entrepreneurs who met in East Africa through the startup incubator, Ampion. Eventually our team grew to include skills ranging from front-end development to public health and policy expertise.]]></description>
                <author><![CDATA[YAPILI <enya@yapili.com>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:15 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Gesellschaft für Informatik e. V. und Open Knowledge Foundation Deutschland e. V. - Turing-Bus]]></title>
                <link>https://www.beyond-eve.com/en/organisations/gesellschaft-fur-informatik-e-v-und-open-knowledge-foundation-deutschland-e-v-turing-bus</link>
                <description><![CDATA[The Turing-Bus is a mobile education programme run by the Open Knowledge Foundation Deutschland and the Gesellschaft für Informatik as part of <strong>Wissenschaftsjahr 2018 - Arbeitswelten der Zukunft</strong> (Science Year 2018: Working Life of the Future), aimed at schools, youth clubs and local institutions.

The bus explores, discusses and critically questions the role of digitalisation and technology for work and society through workshops, talks and hands-on sessions. The project's target group is young people and young adults between the ages of 15 and 25.]]></description>
                <author><![CDATA[Gesellschaft für Informatik e. V. und Open Knowledge Foundation Deutschland e. V. - Turing-Bus <info@turing-bus.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:51 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Kompetenzzentrum Technik-Diversity-Chancengleichheit - komm mach MINT]]></title>
                <link>https://www.beyond-eve.com/en/organisations/kompetenzzentrum-technik-diversity-chancengleichheit-komm-mach-mint</link>
                <description><![CDATA[<strong>The National Pact for Women in STEM Professions, "Komm, mach MINT.", is the only nationwide network initiative in Germany that gets girls and women excited about STEM degree programmes and careers. It already connects more than 260 partners from politics, business, science, the social partners, the media and associations, and translates the dialogue on women and STEM into innovative measures.</strong>

The initiative's goal is to tap the potential of women for scientific and technical professions in view of the emerging shortage of skilled workers, in particular:

- to convey a realistic picture of engineering and science professions and to highlight the opportunities for women in these fields,
- to get young women excited about science and technology degree programmes,
- to win female graduates for careers in technology companies and research institutions.

These goals are set down in a memorandum signed by the partners. (The pact is open to further partners who are committed to its goals and want to play an active part in attracting more women to STEM professions. Each partner has individual options for participating in the national pact.)

The target group is young women at the transitions between school and university and between university and working life.

Achieving these goals requires a broad alliance of the federal government, the Federal Employment Agency, companies, associations, trade unions, universities, research and scientific institutions, women-in-technology networks, the media and public institutions. Experts from the partners are involved in the planning and design of the measures.

The pact is designed to be open to the partners, i.e.

- existing projects and initiatives can be contributed, bundled and made visible through joint public relations work.
- the transfer of successful measures to other regions and institutions is to be made possible.
- new activities by the partners are to be initiated. Among other things, the planned measures are intended to help young women decide on a course of study, enable early contact with female role models, and build confidence in their own ability to succeed in a technical degree programme.]]></description>
                <author><![CDATA[Kompetenzzentrum Technik-Diversity-Chancengleichheit - komm mach MINT <haaf@komm-mach-mint.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:40 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Stiftung Neue Verantwortung e. V.]]></title>
                <link>https://www.beyond-eve.com/en/organisations/stiftung-neue-verantwortung-e-v</link>
                <description><![CDATA[<strong>The Stiftung Neue Verantwortung</strong> (SNV) is an independent think tank that develops concrete ideas as to how German politics can shape technological change in society, the economy and the state. In order to guarantee the independence of its work, the organisation adopted a concept of mixed funding sources that include foundations, public funds and businesses.

Issues of digital infrastructure, the changing pattern of employment, IT security or internet surveillance now affect key areas of economic and social policy, domestic security or the protection of the fundamental rights of individuals. The experts of the SNV formulate analyses, develop policy proposals and organise conferences that address these and other subject areas.

Many excellent research institutes and think tanks already contribute to the fields of foreign policy, economic policy or environmental policy in Germany. Issues related to new technologies, however, lack comparable expert organisations that focus on current politics and social debates. The SNV wants to fill this gap in the landscape of German institutes and think tanks, providing a focal point for everyone whose work touches on current political and social questions raised by the cross-cutting issue of digitalisation.]]></description>
                <author><![CDATA[Stiftung Neue Verantwortung e. V. <info@stiftung-nv.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[digitalcourage]]></title>
                <link>https://www.beyond-eve.com/en/organisations/digitalcourage</link>
                <description><![CDATA[<strong>Digitalcourage works for a liveable world in the digital age.</strong>

Since 1987, Digitalcourage has advocated for fundamental rights, privacy and the protection of personal data. We are a group of people from a variety of backgrounds who explore technology and politics with a critical mindset, and who want to shape both with a focus on human dignity.

We do not want our democracy to be “datafied” and sold out. We work against a society that turns people into targets for marketing, regards them as dispensable in times of a shrinking state, and places them under suspicion as potential terrorists. We stand for a living democracy.

Digitalcourage informs through publicity, speeches, events and congenial interventions. Every year we bestow the German Big Brother Awards (“Oscars for data leeches”). We contribute our expertise to the political process – sometimes without being invited.

More details in English on our background and history can be found on Wikipedia.]]></description>
                <author><![CDATA[digitalcourage <mail@digitalcourage.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:33 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Fraunhofer-Gesellschaft - Institute for Open Communication Systems]]></title>
                <link>https://www.beyond-eve.com/en/organisations/fraunhofer-gesellschaft-institute-for-open-communication-systems</link>
                <description><![CDATA[20 billion – this is how many connected devices will be in private homes and businesses by the year 2020. This development will fundamentally change communication and interaction in all areas of life and work, from highly automated driving and new entertainment options to smart cities and the factory of tomorrow. Digitalization should ensure a higher quality of life, more sustainability and more security. To achieve this, devices have to be connected – but so does (almost) everything else: people, things, systems, processes and organizations.

Innovative applications and business models usually come about through the intelligent integration of data from different sources and domains. An understanding of the different industry-specific and legal requirements is needed here, because there is not just one type of digital transformation. We have many years of experience in the fields of mobility, public safety, administration, e-health and media, with additional technical expertise in systems quality, network infrastructures and software-based systems.

We view ourselves as a provider- and technology-independent mediator between industry, research and the public sector. We advise our customers in politics, administration and industry on their digitalization strategy and help them implement it. To do this, we provide test environments and develop prototypes that are secure, interoperable, and user-oriented.]]></description>
                <author><![CDATA[Fraunhofer-Gesellschaft - Institute for Open Communication Systems <info@fokus.fraunhofer.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:21 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[DESY’s X-ray source PETRA III points possible ways to better RNA vaccines]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/desys-x-ray-source-petra-iii-points-possible-ways-to-better-rna-vaccines</link>
                <description><![CDATA[<p><strong>The pharmaceutical company BioNTech and the University of Mainz are conducting research with other partners on the EMBL beamline</strong></p><p>The Mainz-based biotech company BioNTech, which recently presented the first promising results for a coronavirus vaccine together with the US company Pfizer, is already conducting research on the next generation of RNA drugs at DESY’s X-ray source PETRA III. Using the P12 beamline, operated by the European Molecular Biology Laboratory EMBL, BioNTech has been investigating, together with the Universities of Mainz, Tel Aviv and Leiden as well as the Research Centre Jülich and EMBL, how so-called messenger RNA (mRNA) can be packaged better so as to be more effective in the target organism. The researchers are reporting a number of results in three papers, published in the journals <em>Applied Nano Materials</em>, <em>Cells</em> and <em>Langmuir</em>. The papers also illustrate the potential of analytical research carried out with the help of the research infrastructure available on the DESY campus.</p>]]></description>
                <author><![CDATA[Deutsches Elektronen-Synchrotron DESY]]></author>
                <pubDate>Fri, 11 Dec 2020 20:58:30 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Is the United States Tax System Favoring Excessive Automation?]]></title>
                <link>https://www.beyond-eve.com/en/events/is-the-united-states-tax-system-favoring-excessive-automation</link>
                <description><![CDATA[<p>As the next wave of information technology matures, many commentators worry about the job disruption that automation technology will bring. In a recent policy brief released by the&nbsp;<a href="https://workofthefuture.mit.edu/" rel="noopener noreferrer" target="_blank">MIT Task Force on the Work of the Future</a>, MIT economist Daron Acemoglu and his co-authors argued that the United States currently taxes machinery and equipment too little compared to labor, thereby encouraging excessive automation that eliminates jobs without making the economy more productive. ITIF President Rob Atkinson has argued in response that automation doesn’t lead to joblessness and that increasing taxes on automation equipment, including artificial intelligence, would hurt U.S. competitiveness and reduce real wage growth.</p><p>ITIF hosted a debate in which Acemoglu and Atkinson laid out their views about the future of automation technology and the effects it may have on U.S. competitiveness and the economy.</p><p><strong>Speakers&nbsp;</strong></p><p><a href="https://itif.org/person/bernie-becker" rel="noopener noreferrer" target="_blank">Bernie Becker</a></p><p>Tax Reporter, POLITICO, Moderator</p><p><a href="https://itif.org/person/robert-d-atkinson" rel="noopener noreferrer" target="_blank">Robert D. Atkinson</a></p><p>President, Information Technology and Innovation Foundation, Speaker</p><p><a href="https://itif.org/person/daron-acemoglu" rel="noopener noreferrer" target="_blank">Daron Acemoglu</a></p><p>Institute Professor, MIT, Speaker</p>]]></description>
                <author><![CDATA[Information Technology & Innovation foundation ITIF]]></author>
                <pubDate>Sat, 02 Jan 2021 12:15:46 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Algorithmic Knowledge Production – Principles, Problems, Prospects]]></title>
                <link>https://www.beyond-eve.com/en/events/algorithmic-knowledge-production-principles-problems-prospects</link>
                <description><![CDATA[The conference will discuss basic principles and problems of algorithmic knowledge production in contemporary science and society. Witnessed by most recent breakthrough research, quantum algorithms introduce new ways of processing information entirely at variance with traditional classical computation. Also, algorithms are now utilized in proving mathematical theorems. This forces us to scrutinize the notion of understanding and even to ask what this actually means for artificial and natural intelligence. In addition to such basic issues, the conference addresses concrete and specific applications: automated decision making and its legal consequences, the successes of machine learning in medical diagnostics, and the influence of speedy algorithms on financial markets and other areas of economics.

With:
Prof. Dr. Joachim Buhmann (ETH Zürich)
Dr. Liesbeth de Mol (Université de Lille)
Prof. Dr. Markus Gabriel (Universität Bonn)
Prof. Dr. Renato Renner (ETH Zürich)
Prof. Dr. Florent Thouvenin (Universität Zürich)
Prof. Dr. Josef Teichmann (ETH Zürich)]]></description>
                <author><![CDATA[Collegium Helveticum]]></author>
                <pubDate>Sat, 05 Dec 2020 21:40:12 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Understanding the Challenges of Plant Science: Reflections from the Outside-In]]></title>
                <link>https://www.beyond-eve.com/en/events/understanding-the-challenges-of-plant-science-reflections-from-the-outside-in</link>
                <description><![CDATA[<p>The investigation of plant intelligence and sentience is here to stay. And yet, despite the growing body of literature on the subject, we appear not to be making headway. Controversies over plant intelligent behavior and consciousness are part of a long botanical tradition. But things are only getting worse in today’s academic culture of “fast science”. The result is a lack of a common language and subsequent misunderstandings and misdiagnoses. Many findings that have gripped the public’s imagination are proving difficult to replicate. I will illustrate how the experimental evidence on plant perception and learning brings a mixed bag of both supportive and inconclusive results. Doing better calls for placing the discussion outside of old and sterile battles, allowing for alternative frameworks of thinking. Doing better calls for the inclusion of counterarguments and adversarial collaboration; for respecting the guiding role that complementary, rather than competing, models and theoretical frameworks can play. Doing better calls for “slow science” and, echoing Ludwik Fleck, for the nourishment of social interactions in both the plant and cognitive science communities. </p><p><br></p><p>The goal of this talk is not to claim that plants are intelligent or that plant sentience (if it exists) is of the same kind as human consciousness. Neither taking for granted that plants are intelligent and/or sentient, nor dismissing the possibility that they are, I shall argue that the time is ripe to cast the problem in a scientifically tractable manner. The goal is to invite constructive debate, and to scrutinize established objections and thinking vetoes to better understand the challenges of plant science. <strong>Paco Calvo is a Professor of Philosophy of Science</strong>, and Principal Investigator of MINTLab (Minimal Intelligence Lab) at the University of Murcia (Spain). 
He specialized in the philosophy of cognitive science courtesy of a Fulbright scholarship in the late 1990s (University of California, San Diego), and received a PhD in Philosophy from the University of Glasgow (UK) in 2000. His research interests range broadly within the cognitive sciences, with special emphasis on ecological psychology, embodied cognitive science, and plant intelligence. </p>]]></description>
                <author><![CDATA[Collegium Helveticum]]></author>
                <pubDate>Sat, 05 Dec 2020 22:21:02 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[I, Scientist]]></title>
                <link>https://www.beyond-eve.com/en/events/i-scientist</link>
                <description><![CDATA[<p>We are delighted to invite you to the I, Scientist conference on gender, career paths and networking, to be held virtually on 16-19 September 2020. Registration is now open via <a href="https://year2020.iscientist.de/" rel="noopener noreferrer" target="_blank">https://year2020.iscientist.de/</a> and we have early bird tickets for the first 50 registrations. This year's conference highlights include inspiring talks by successful female and queer scientists and a variety of exciting networking events. We will also place a special focus on race and academia, as well as on the impact of COVID-19 on gender equality and leadership. For more details please visit our website <a href="https://year2020.iscientist.de/" rel="noopener noreferrer" target="_blank">I,Scientist</a>. The conference aims to increase the visibility of diverse role models in the natural and empirical sciences, to introduce young researchers to a variety of career options, to provide them with networking opportunities, and to raise awareness of gender-based biases. The conference is designed and organized by PhD students and young researchers of the natural, mathematical and empirical sciences for their peers. To raise awareness of unconscious gender-based biases, detect obstacles and find solutions regarding gender imbalance and the combination of family life and a career, a dialogue between all genders is needed. Therefore, all genders are welcome to attend! We are excited to meet you at I, Scientist 2020!</p>]]></description>
                <author><![CDATA[Lise-Meitner-Gesellschaft e.V. <kontakt@lise-meitner-gesellschaft.de>]]></author>
                <pubDate>Sun, 06 Dec 2020 11:13:41 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Video streaming: data transmission technology crucial for climate footprint]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/video-streaming-data-transmission-technology-crucial-for-climate-footprint</link>
                <description><![CDATA[<p>HD-quality video streaming produces different levels of greenhouse gas emissions depending on the transmission technology. The CO2 emissions generated by data processing in a data centre are relatively low, at 1.5 grams of CO2 per hour. However, the technology used to transmit data from the data centre to the user determines the climate compatibility of cloud services like video streaming. Greenhouse gas emissions can be reduced considerably, depending on the data transmission technology used. This is shown by initial research findings commissioned by the German Environment Agency. Picture: German Environment Agency (UBA)</p><p>The lowest <a href="https://www.umweltbundesamt.de/service/glossar/c?tag=CO2#alphabar" rel="noopener noreferrer" target="_blank">CO2</a> emissions are produced when HD video is streamed at home over a fibre optic connection, with only two grams of CO2 per hour of video streaming for the data centre and data transmission. A copper cable (VDSL) generates four grams per hour. UMTS data transmission (3G), however, produces 90 grams of CO2 per hour. If the transmission technology used to transmit data is 5G instead, only about five grams of CO2 are emitted per hour. The electricity used by the end device is not factored into this calculation.</p>]]></description>
                <author><![CDATA[Umweltbundesamt <buergerservice@uba.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 19:38:18 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[A Half Century of Internet: How it works today]]></title>
                <link>https://www.beyond-eve.com/en/events/a-half-century-of-internet-how-it-works-today</link>
                <description><![CDATA[The Internet connects more than half of the world's population. This revolutionary form of transmitting all kinds of data between places on the planet has made the network of networks the indispensable backbone of societies. The number of users has exploded to four billion people.

The speed of change is dramatic and, for some, breathtaking. Many well-known and even more unknown personalities have shaped the development of the Internet. However, this exciting success story also reveals the dark sides of this development. What has become of the original hope for a democratization of communication? To what extent has the Internet provided access to better educational opportunities? How do large Internet companies and governments use the Internet? How can you communicate safely over this network?]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:39:33 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Strategic Design Thinking For Every Day]]></title>
                <link>https://www.beyond-eve.com/en/events/strategic-design-thinking-for-every-day</link>
                <description><![CDATA[This is an online course for everyone who wants to use Strategic Design Thinking for everyday challenges. You learn to use the whole potential of the approach, going beyond the method and the tools. Equip yourself with the most impactful Design Thinking principles to unlock your innovation capacity in complex, highly constrained situations.]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:58:42 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Nature and Future of Information Confrontation]]></title>
                <link>https://www.beyond-eve.com/en/events/the-nature-and-future-of-information-confrontation</link>
                <description><![CDATA[The Center for the Governance of Change (CGC) is launching <strong>Conversations with the Future</strong>, a new video series on a range of issues related to technology, disruption and change that will bring together academics, experts and practitioners.

In this first episode, <strong>“The Nature and Future of Information Confrontation”</strong>, Peter Pomerantsev, Visiting Senior Fellow at the London School of Economics, and Nina Jankowicz, Disinformation Fellow at the Wilson Center, address the issue of information conflict and disinformation in times of the Covid-19 pandemic and the Black Lives Matter protests. The talk is moderated by Oscar Jonsson, Academic Director of the CGC.]]></description>
                <author><![CDATA[IE University - The Center for the Governance of Change (CGC) <cgc@ie.edu>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:47:11 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Countering the COVID-19 Misinfodemic with Text Similarity and Social Data Science]]></title>
                <link>https://www.beyond-eve.com/en/events/countering-the-covid-19-misinfodemic-with-text-similarity-and-social-data-science</link>
                <description><![CDATA[<p>The Oxford Internet Institute is proud to present faculty member <strong>Dr Scott A. Hale </strong>for this next session in our Wednesday Webinar Series. The session will be moderated by Dr Chico Camargo, Postdoctoral Researcher in Data Science at the OII.</p><p><br></p><p>Misinformation about COVID-19 has led to severe harms in multiple instances: as an example, a rumor that drinking methanol would cure the virus resulted in hundreds of deaths. While end-to-end encryption is an important privacy safeguard, this encryption prevents platforms such as WhatsApp, Signal, and others from employing centralized interventions and warnings about misinformation. Several options, however, from user interface changes to tip lines to having more intelligence on client devices offer hope.</p><p><br></p><p>In this presentation Dr Scott A. Hale will discuss how text similarity algorithms are being used to help fact-checkers locate misinformation, cluster similar misinformation, and identify existing fact-checks in the context of tip lines on platforms with end-to-end encryption. The presentation will detail research at the Oxford Internet Institute and Meedan, a global technology not-for-profit developing open-source tools for fact-checking and translation, that is actively being used by fact-checkers to improve the information available online.</p>]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Sun, 06 Dec 2020 13:04:47 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Data, data (science), get us out of here!]]></title>
                <link>https://www.beyond-eve.com/en/events/data-data-science-get-us-out-of-here</link>
                <description><![CDATA[<p><strong>Data, data (science), get us out of here! Recommendations for resilient and fair policy-making in a crisis</strong> <strong>Prof Helen Margetts, Professor of Society and the Internet, OII and Director, Public Policy Programme at The Alan Turing Institute</strong> in discussion with Ben MacArthur, Professor of Mathematical Sciences at the University of Southampton. Covid-19 poses an extraordinary challenge for policy-makers. In the face of a new disease that has brought the world to a standstill, policy-makers have to identify at breakneck speed the optimal measures needed to save lives and restart the economy. Good data and solid modelling are crucial, yet we are seeing government after government fail at harnessing the power of these two critical tools. Policy-makers are struggling to understand what data they need to collect, what models they need to build, and what safeguards they must put in place in order to find a resilient and fair way out of this crisis. In this talk, we provide clarity and make concrete recommendations as to how policy-makers can ensure that data and data science are our ticket back to normality.</p>]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Sat, 05 Dec 2020 23:03:38 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Bridging the Gap Between EU Non-Discrimination Law and AI]]></title>
                <link>https://www.beyond-eve.com/en/events/bridging-the-gap-between-eu-non-discrimination-law-and-ai</link>
                <description><![CDATA[<p><strong>Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI</strong> Fairness and discrimination in algorithmic systems is globally recognised as a topic of critical importance. To date, a majority of work has started from an American regulatory perspective defined by the notions of ‘disparate treatment’ and ‘disparate impact’. European legal notions of discrimination are not, however, equivalent. In this talk I will examine EU law and jurisprudence of the European Court of Justice concerning non-discrimination. I will identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. Algorithms are not similar to human decision-makers; they operate at speeds, scales and levels of complexity that defy human understanding, group and act upon classes of people that do not resemble historically protected groups, and do so without potential victims ever being aware of the scope and effects of decision-making. As a result, individuals may never be aware they have been disadvantaged and thus lack a starting point to raise a claim. A clear gap exists between statistical measures of fairness and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements historically used by the Court. The talk will focus on three contributions. First, I review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU’s current requirements are not fit to be automated. Second, I show that automating fairness or non-discrimination in Europe may be impossible because the law does not provide a static or homogenous framework. </p><p>Finally, I propose a statistical test as a baseline to identify and assess potential cases of algorithmic discrimination in Europe. 
Adoption of this statistical test will help push forward academic and policy debates around scalable solutions for fairness and non-discrimination in automated systems in Europe.</p>]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:56:45 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Joanna Bryson: The role of humans in an age of intelligent machines]]></title>
                <link>https://www.beyond-eve.com/en/events/joanna-bryson-the-role-of-humans-in-an-age-of-intelligent-machines</link>
                <description><![CDATA[<p>Artificial intelligence (AI) and the information age are bringing us more information about ourselves and each other than any society has ever known. Yet at the same time it brings machines seemingly more capable of every human endeavour than any human can be. What are the limits of AI? Of intelligence and humanity more broadly? What are our ethical obligations to machines? Do these alter our obligations to each other? What is the basis of our social obligations?</p><p>In her lecture Joanna Bryson will argue that there are really only two problems humanity has to solve: sustainability and inequality, or put another way: security and power. Or put a third way: how big of a pie can we make, and how do we slice up that pie? Life is not a zero-sum game. We use the security of sociality to construct public goods where everyone benefits. But still, every individual needs enough pie to thrive, and this is the challenge of inequality. Joanna Bryson will argue that understanding these processes answers the questions above. She will then look at how AI is presently affecting both these problems. </p><p><br></p><p><strong>Joanna J Bryson</strong>, Professor of Ethics and Technology at Hertie School, is an academic recognised for broad expertise on intelligence, its nature, and its consequences. She advises governments, transnational agencies, and NGOs globally, particularly in AI policy. She holds two degrees each in psychology and AI (BA Chicago, MSc &amp; MPhil Edinburgh, PhD MIT). Her work has appeared in venues ranging from reddit to the journal Science. She continues to research both the systems engineering of AI and the cognitive science of intelligence, with present focuses on the impact of technology on human cooperation, and new models of governance for AI and ICT. </p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 12:15:09 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA['Lie Machines’ Online Book Launch]]></title>
                <link>https://www.beyond-eve.com/en/events/lie-machines-online-book-launch</link>
                <description><![CDATA[Professor Philip Howard presents his new book ‘Lie Machines’, which offers new insights into the world’s most damaging disinformation campaigns.

Philip N. Howard is the Director of the OII, and Professor of Internet Studies. He is a professor of sociology, information and international affairs.]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:57:51 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Design Thinking 4.0 – The Cultural Dimension of Digital Transformation]]></title>
                <link>https://www.beyond-eve.com/en/events/design-thinking-40-the-cultural-dimension-of-digital-transformation</link>
                <description><![CDATA[Design Thinking is an innovation approach which has evolved over the past 12 years from a university program at Stanford and HPI Potsdam into a globally respected and universally applied set of methods and tools for supporting and driving change towards a networked culture in organizations. The course is an introduction to the core principles of Design Thinking, explains its cultural impact and inspires participants to actively use Design Thinking at the organizational level.

The course is valuable for decision makers who want to get an idea of the strategic underpinnings of Design Thinking. They will learn the terminology and gain a better understanding of why and how to use Design Thinking to drive the transformation towards a networked organization.

The course is not a substitute for a real Design Thinking workshop, which at its best gives participants a deep and diverse team experience in a creative environment. But it helps to build a better understanding of the core concepts behind Design Thinking and supports the development of a transformational strategy.]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:58:21 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Challenges in Digital Technology Then and Now]]></title>
                <link>https://www.beyond-eve.com/en/events/challenges-in-digital-technology-then-and-now</link>
                <description><![CDATA[<p>Governments and publics are increasingly asking that tech companies work to address the challenges and adapt to the changes technology has unleashed, from digital security to the impact of the COVID-19 pandemic. At the core of these new expectations is the sense that world-changing technologies must be governed in accordance with a broad ethic of responsibility – to individual users and to society at large.</p><p>In this conversation, <strong>Jonathan Zittrain</strong> was joined by Microsoft President <strong>Brad Smith</strong> to discuss how big tech might rise to these new challenges and opportunities.</p><p><br></p><p><em>This event is part of the&nbsp;</em><a href="https://cyber.harvard.edu/programs/ai-policy-practice" rel="noopener noreferrer" target="_blank"><em>AI Policy Practice.</em></a></p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 21:00:36 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Philipp Staab – The crises of digital capitalism]]></title>
                <link>https://www.beyond-eve.com/en/events/philipp-staab-the-crises-of-digital-capitalism</link>
                <description><![CDATA[<p>For around 50 years, digital technologies have been the key to economic transformation. However, it is only since the late 1990s that we have begun to see the emergence of a genuinely digital capitalism with the commercial Internet at its core. Leading digital companies such as Google, Apple, Facebook and Amazon are assuming a key position for ever larger parts of the economy. They have not only survived periodic crises of capitalism unscathed, but have even grown from them. At the present time, when the acute crisis of public health is about to turn into a socio-economic crisis of the global economy, it is therefore necessary to ask: What is digital capitalism? How are digitalisation and socio-economic crises related? And: What can we learn from this about a post-Corona world?</p><p><br></p><p><strong>Philipp Staab</strong>&nbsp;is Professor of “Sociology of the Future of Work” at the Humboldt University of Berlin and the <a href="https://www.digital-future.berlin/" rel="noopener noreferrer" target="_blank">Einstein Center Digital Future</a> (ECDF). As a sociologist he deals with the topics of technology, work, political economy and social inequality. In his research in recent years, he has focused on the leading companies of the commercial Internet such as Google, Amazon, Apple, Facebook, Alibaba and Tencent, as well as various start-ups.</p><p><br></p><p><strong>The event will be held in English.</strong></p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 12:21:38 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Co-Innovation Journey for Startups and Corporates]]></title>
                <link>https://www.beyond-eve.com/en/events/co-innovation-journey-for-startups-and-corporates</link>
                <description><![CDATA[In times of a world more disruptive, complex and dynamic than ever before, innovation is no longer a luxury but a precondition of business survival. Forward-thinking established companies are turning innovation challenges into opportunities and teaming up with fast, creative startups to jointly disrupt whole industries. The competition to survive is being replaced by collaboration to thrive – to thrive in this new, exciting ecosystem of opportunities.

However, this is true only for selected actors – the majority is not making use of this collaboration potential. This course will guide you in preparing, planning and implementing a mutually beneficial collaboration, regardless of whether you are a startup, a corporate, or simply interested in reaping the potential.]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:59:25 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Legal Tech – potentials and applications of technology based legal consulting ]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/legal-tech-potentials-and-applications-of-technology-based-legal-consulting</link>
                <description><![CDATA[<p>Since there is currently a high level of dynamism with regard to the development of new business models and the establishment of legal tech companies with a focus on legal advice and legal services, the TAB has published a study on their potential and applications.</p><p>TAB's policy brief in English <a href="https://www.tab-beim-bundestag.de/en/pdf/publications/tab-fokus/TAB-Fokus-024.pdf" rel="noopener noreferrer" target="_blank">TAB-Fokus no.&nbsp;24 PDF&nbsp;[2,58&nbsp;MB]</a> provides an overview of Legal Tech services and applications, assesses the potentials, risks and opportunities involved and explores further potential needs for action.</p>]]></description>
                <author><![CDATA[KIT - Karlsruher Institut für Technologie - Office of Technology Assessment at the German Bundestag <buero@tab-beim-bundestag.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 19:41:13 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Why Fairness Cannot Be Automated]]></title>
                <link>https://www.beyond-eve.com/en/events/why-fairness-cannot-be-automated</link>
                <description><![CDATA[<p>Fairness and discrimination in algorithmic systems are globally recognized as topics of critical importance. To date, the majority of work in this area starts from an American regulatory perspective defined by the notions of ‘disparate treatment’ and ‘disparate impact.’ But European legal notions of discrimination are not equivalent.&nbsp;</p><p><br></p><p>In this talk, <strong>Sandra Wachter</strong>, Visiting Professor at Harvard Law School and Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, robotics and Internet Regulation at the Oxford Internet Institute (OII) at the University of Oxford, examines EU law and jurisprudence of the European Court of Justice concerning non-discrimination and identifies&nbsp;a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness.&nbsp;</p><p><br></p><p>Wachter discusses&nbsp;the evidential requirements for bringing a claim under EU non-discrimination law and proposes a statistical test as a baseline to identify and assess potential cases of algorithmic discrimination in Europe.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:58:52 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Bot or Human? Unreliable Automatic Bot Detection]]></title>
                <link>https://www.beyond-eve.com/en/events/bot-or-human-unreliable-automatic-bot-detection</link>
                <description><![CDATA[<p>The identification of bots is an important and complicated task. The <strong>bot classifier Botometer</strong> was successfully introduced as a way to estimate the number of bots in a given list of accounts and has been frequently used in academic publications.</p><p>Given its relevance for academic research, and our understanding of the presence of automated accounts in any given Twitter discourse, <strong>Adrian Rauchfleisch</strong> and <strong>Jonas Kaiser</strong> studied Botometer's diagnostic ability over time. To do so, Rauchfleisch and Kaiser collected the Botometer scores for five datasets in two languages (English/German) over three months. For this virtual event, Rauchfleisch and Kaiser discussed their findings and answered questions about the implications of their research.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:57:26 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[All Data Are Local]]></title>
                <link>https://www.beyond-eve.com/en/events/all-data-are-local</link>
                <description><![CDATA[<p>“In our data-driven society, it is too easy to assume the transparency of data. Instead, we should approach data sets with an awareness that they are created by humans and their dutiful machines, at a time, in a place, with the instruments at hand, for audiences that are conditioned to receive them,” says <strong>Yanni Alexander Loukissas</strong>,&nbsp;Assistant Professor of Digital Media in the School of Literature, Media, and Communication at Georgia Tech.</p><p>All data are local. The term&nbsp;data set&nbsp;implies something discrete, complete, and portable, but it is none of those things. Examining a series of sources important for understanding public data in the United States—Harvard's Arnold Arboretum, the Digital Public Library of America, UCLA's Television News Archive, and the real estate marketplace Zillow—this talk explains how to analyze data settings rather than data sets.</p><p><br></p><p>The talk sets out six principles: all data are local; data have complex attachments to place; data are collected from heterogeneous sources; data and algorithms are inextricably entangled; interfaces recontextualize data; and data are indexes to local knowledge. Then, it provides a set of practical guidelines to follow. These findings are based on a combination of qualitative research on data cultures and exploratory data visualizations. Rebutting the myth of “digital universalism,” this work reminds audiences of the meaning-making power of the local.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:55:38 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Algorithmic or human bias? Understanding discrimination in the gig economy]]></title>
                <link>https://www.beyond-eve.com/en/events/algorithmic-or-human-bias-understanding-discrimination-in-the-gig-economy</link>
                <description><![CDATA[The rapid expansion of the gig economy has raised concerns about the role of algorithms in labor markets. Two central concerns are the potential to exacerbate discrimination in hiring and the suppression of worker wages. By comparison, bias in human decision-making in the gig economy context has not received similar attention. This lecture will redirect attention to human choices, and explore ways in which gig economy platforms create conditions that favor the activation of stereotypes in online hiring. The lecture draws from field experiments and the analysis of transactional data to reveal the mechanisms that result in inferior outcomes for women and online workers based in the Global South.

About the speaker
<strong>Hernan Galperin</strong> (Ph.D., Stanford University) is Associate Professor and Assistant Dean at the Annenberg School for Communication, University of Southern California. He is also Director of the Annenberg Research Network on International Communication (ARNIC). His other affiliations include the USC Annenberg Innovation Lab, the USC Price Spatial Analysis Lab, and the USC Price Center for Social Innovations. Previously, he served as Associate Professor and Founder-Director of the Center for Technology and Society at the Universidad de San Andrés (Argentina).]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Sat, 05 Dec 2020 21:40:29 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Advancing Racial Literacy in Tech]]></title>
                <link>https://www.beyond-eve.com/en/events/advancing-racial-literacy-in-tech</link>
                <description><![CDATA[<p>Dr. Howard Stevenson of the University of Pennsylvania kicked&nbsp;off the Berkman Klein Spring 2020 Luncheon Series with a talk and discussion on&nbsp;Advancing Racial Literacy in Tech.&nbsp;Racial literacy provides a framework for considering how to combat the proliferation of racially-biased technology.&nbsp;Dr. Stevenson was joined in conversation by Jessie Daniels and Mutale Nkonde.&nbsp;</p><p><strong>Dr. Howard Stevenson</strong> is the Constance Clayton Professor of Urban Education, Professor of Africana Studies, in the Human Development &amp; Quantitative Methods Division of the Graduate School of Education at the University of Pennsylvania. He is the Executive Director of the Racial Empowerment Collaborative at Penn, designed to promote racial literacy in education, health, community and justice institutions.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:54:08 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AdTech and the Future of the Internet]]></title>
                <link>https://www.beyond-eve.com/en/events/adtech-and-the-future-of-the-internet</link>
                <description><![CDATA[Regardless of their outcome, current investigations into the compliance of the AdTech industry with data protection law will define the conditions under which the internet's key business model will function in future. Building on the discussion in the panel organised by the Open Rights Group at 11.45, this panel will bring together key stakeholders in the AdTech-data protection discussion and will seek to chart the landscape, opportunities and challenges for AdTech moving forward. Among others, the panel will consider the following questions:

- What are the likely outcomes of the current investigations into AdTech's compliance with data protection law?
- Will the AdTech industry be able to make the changes required?
- How will the online advertising ecosystem look in the future?
- How will the internet change as a result of changes in AdTech?]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sat, 05 Dec 2020 21:59:59 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Data Economy, AI, Privacy and Sustainability in times of Climate Emergency]]></title>
                <link>https://www.beyond-eve.com/en/events/data-economy-ai-privacy-and-sustainability-in-times-of-climate-emergency</link>
                <description><![CDATA[In 2008, the Internet was already responsible for 2% of global CO2 emissions, exceeding those of the entire aviation industry. The number of users and network connections has increased at a whopping pace ever since. As an indication of this: global energy demand related to internet-connected devices is increasing by 20% a year. In 2015 ICT already accounted for 3-5% of the world’s electricity use and it is expected that, by 2025, ICT will consume 20% of the world's electricity, which would potentially hamper global attempts to meet climate change targets. Given the growing significance of this impact on the global economy, there is an urgent need to raise awareness and ensure more sustainable and responsible development whilst harnessing the huge potential for adding value in society. This panel will discuss how society can efficiently tackle the critical environmental toll of our current data ecosystem and imagine future sustainable technologies and modes of operating within these technologies. The panel will consider, among other issues:

1. The hidden environmental impact of the current data economy
2. Materiality of AI and future environmental costs of automating actions
3. Environmental impact of devices: ethical and environmental concerns on mineral sourcing
4. Policy for sustainable privacy, data economy and AI]]></description>
                <author><![CDATA[Computers, Privacy & Data Protection]]></author>
                <pubDate>Sat, 05 Dec 2020 21:52:27 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Sharenthood: How Parents, Teachers, and Other Trusted Adults Harm Youth Privacy & Opportunity]]></title>
                <link>https://www.beyond-eve.com/en/events/sharenthood-how-parents-teachers-and-other-trusted-adults-harm-youth-privacy-opportunity</link>
                <description><![CDATA[<p>A new book by BKC Faculty Associate and Youth &amp; Media team member Leah Plunkett joins works by Margaret Atwood and Stephen King on&nbsp;<em>Wired</em>'s list of "<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__www.wired.com_story_2019-2Dfall-2Dbook-2Dlist_&amp;d=DwMFaQ&amp;c=WO-RGvefibhHBZq3fL85hQ&amp;r=CHTneD5RSmS_iQO9PMNALYKVR4KMExQFqbFqN3Fz0EE&amp;m=9vD3zUfmaxbfhqdluuZQ5t5BKutj2KHi_tWUxGXmGf0&amp;s=C91Oe5Z4sLlLNOPhynZ_CsMAjckdLad2MOKFSdhjSzI&amp;e=" rel="noopener noreferrer" target="_blank">must-read</a>" books for fall 2019.&nbsp;<a href="https://urldefense.proofpoint.com/v2/url?u=https-3A__leahplunkett.com_&amp;d=DwMFaQ&amp;c=WO-RGvefibhHBZq3fL85hQ&amp;r=CHTneD5RSmS_iQO9PMNALYKVR4KMExQFqbFqN3Fz0EE&amp;m=9vD3zUfmaxbfhqdluuZQ5t5BKutj2KHi_tWUxGXmGf0&amp;s=vKyDHFbwsuHH-Iili3z5YGmQcA3bv1KpK6uE_OAYFKc&amp;e=" rel="noopener noreferrer" target="_blank">Leah's book</a>&nbsp;from MIT Press,&nbsp;<em>Sharenthood: Why We Should Think Before We Talk About Our Kids Online</em>,&nbsp;"illuminates children's digital footprints: the digital baby monitors, the daycare livestreams, the nurse's office health records, the bus and cafeteria passes recording their travel and consumption patterns―all part of an indelible dossier for anyone who knows how to look for it. Plunkett thinks the offspring surveillance ought to stop and has suggestions for how to kick the sharenting habit. They are worth considering." </p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:53:03 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Who Governs the Internet?]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/who-governs-the-internet</link>
                <description><![CDATA[<p>Based on the guiding principle "digital policy means social policy", this publication follows the idea that internet governance affects everyone. An open, free and global Internet is vital for all. Therefore, infrastructures for surveillance and censorship should not be established.</p><p>This publication gives an overview of actors and areas of action and stresses that collective engagement is needed more than ever to further develop Internet governance, to strengthen multistakeholderism as well as multilateralism and to hinder the fragmentation of the net. The publication was created by iRights.Lab on behalf of the FES.</p><p><a href="http://library.fes.de/pdf-files/akademie/15917.pdf" rel="noopener noreferrer" target="_blank">Here</a> you can find the online version of "Who Governs the Internet?"</p>]]></description>
                <author><![CDATA[Friedrich Ebert Stiftung]]></author>
                <pubDate>Sat, 12 Dec 2020 19:43:49 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Between Truth and Power]]></title>
                <link>https://www.beyond-eve.com/en/events/between-truth-and-power</link>
                <description><![CDATA[<p>Our current legal system is to a great extent the product of&nbsp;an earlier period of social and economic transformation.&nbsp;From the late 19th century through the mid-20th century, the&nbsp;U.S. legal system underwent profound, tectonic shifts.&nbsp;Today, struggles over ownership of information-age&nbsp;resources and accountability for information-age harms are&nbsp;producing new systemic changes.&nbsp;In&nbsp;<em>Between Truth and Power</em>, Julie E. Cohen explores the&nbsp;relationships between legal institutions and political and&nbsp;economic transformation. Systematically examining&nbsp;struggles over the conditions of information flow and the&nbsp;design of information architectures and business models,&nbsp;she argues that as law is enlisted to help produce the&nbsp;profound economic and sociotechnical shifts that have&nbsp;accompanied the emergence of the informational economy,&nbsp;it too is transforming in fundamental ways.</p><p><br></p><p><strong>Julie E. Cohen</strong> is the Mark Claster Mamolen Professor of Law and Technology at the Georgetown University Law Center. She teaches and writes about surveillance, privacy and data protection, intellectual property, information platforms, and the ways that networked information and communication technologies are reshaping legal institutions. She is the author of&nbsp;<em>Between Truth and Power: The Legal Constructions of Informational Capitalism</em>&nbsp;(Oxford University Press, 2019); <em>Configuring the Networked Self: Law, Code and the Play of Everyday Practice</em>&nbsp;(Yale University Press, 2012), which won the 2013 Association of Internet Researchers Book Award and was shortlisted for the&nbsp;<em>Surveillance &amp; Society</em>&nbsp;Journal’s 2013 Book Prize; and numerous journal articles and book chapters. </p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:52:09 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Queen's Lecture 2019]]></title>
                <link>https://www.beyond-eve.com/en/events/queens-lecture-2019</link>
                <description><![CDATA[<p><strong>Professor Corinne Le Quéré: "The interactions between climate change and the carbon cycle and the future we choose"</strong> This year will be remembered as the year the world woke up to the climate crisis – and it’s about time! Climate change is unfolding as predicted by scientists repeatedly and consistently over the past thirty years at least. We can now see the changes with our own eyes, and the impacts look a lot scarier in reality than on paper. But just how did we get here, and what comes next? This lecture will present the scientific basis for climate change through the lens of the natural carbon cycle. It will show how emissions of carbon dioxide (CO2) from human activities have caused the planet to warm, and have set in motion a train of changes in the natural carbon cycle. Every year, the land and ocean natural carbon reservoirs, the so-called carbon ‘sinks’, absorb on average 55% of the CO2 emissions we release into the atmosphere from burning fossil fuels, deforestation, and other activities. The carbon sinks slow down the rate of climate change, but they themselves respond to a changing climate by leaving more CO2 in the atmosphere. The latest evidence on trends in emissions and carbon sinks over the past 60 years reveals the limits of our understanding and the challenges we face to develop a planetary monitoring system that can keep track of the rapidly changing carbon cycle. The lecture will weave together the science of climate change and its interactions with the carbon cycle with the evolving relationship between scientists and society over the past decades. It will detail the growing momentum of global political leadership emerging to tackle climate change, the challenges that we face, and offer reflections on ways to bring about the future we choose. <strong>Corinne Le Quéré</strong> is Royal Society Research Professor of Climate Change Science at the University of East Anglia. 
She is a member of the UK Committee on Climate Change and chair of France's related Haut Conseil pour le climat. The Queen's Lectures are supported by the British Embassy and the British Council Germany.</p><p><br></p><p><em>The lecture will be held in English.</em></p>]]></description>
                <author><![CDATA[Technische Universität Berlin]]></author>
                <pubDate>Fri, 04 Dec 2020 12:58:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Palantir, the secretive data behemoth linked to the Trump administration, expands into Europe]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/palantir-the-secretive-data-behemoth-linked-to-the-trump-administration-expands-into-europe</link>
                <description><![CDATA[<p><strong>The data analysis company, known in particular for running the deportation machine of the Trump administration, is expanding aggressively into Europe. Who are its clients?</strong></p><p>Palantir was founded in 2004, in the wake of the September 11 attacks. Its founders wanted to help intelligence agencies organize the data they collected, so that they would identify threats before they could strike. It is widely rumored that its tools helped find Osama Bin Laden prior to his assassination in 2011 (another theory is that the US simply <a href="https://www.lrb.co.uk/v37/n10/seymour-m-hersh/the-killing-of-osama-bin-laden" rel="noopener noreferrer" target="_blank">bribed</a> Pakistani officials).</p><p>But Palantir is not good at making money. The company has never been profitable, in large part because it had to customize its products for each client, making economies of scale impossible. A new product launched in 2017, called Foundry, is supposed to solve this problem. Europe <a href="https://www.latimes.com/business/la-fi-palantir-sales-ipo-20190107-story.html" rel="noopener noreferrer" target="_blank">became</a> the testing ground for this new commercial strategy, which relies largely on Foundry.</p><p>AlgorithmWatch asked close to forty German companies about their links to Palantir and browsed hundreds of open sources to map Palantir’s clients.</p><p><br></p><p>Palantir’s software is nothing special. Despite claims that it could turn “data landfills into gold mines,” it simply provides a visual interface that lets clients interact with their own data streams. It is built on top of existing technologies such as Apache Spark, a cluster-computing framework. 
An employee, who might not be privy to every product of the company, wrote in 2016 that Palantir did “no artificial intelligence”, “no machine learning” and “no magic”.</p><p>These relatively modest capabilities might explain why several clients, including American Express and Coca-Cola, dropped Palantir in the last few years. Giovanni Tummarello, co-founder of the Ireland-based <a href="http://Siren.io" rel="noopener noreferrer" target="_blank">Siren.io</a>, a competitor, claimed in 2017 to have signed some of Palantir’s former clients, mostly due to lower prices.</p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 18:24:12 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The future of health data]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/facebook-enables-automated-scams-but-fails-to-automate-the-fight-against-them-2</link>
                <description><![CDATA[<h3>A guide to a research-compatible electronic patient file</h3><p>Under the title “Zukunft Gesundheitsdaten — Wegweiser zu einer forschungskompatiblen elektronischen Patientenakte” (Future health data — a guide to a research-compatible electronic patient file), the iRights.Lab developed a comprehensive study on the subject of eHealth on behalf of Bundesdruckerei (federal printing house). It shows which challenges have to be mastered so that Germany can also use the potential of digitalization in the field of eHealth.</p><p><a href="https://www.bundesdruckerei.de/system/files/dokumente/pdf/Studie_Zukunft-Gesundheitsdaten.pdf" rel="noopener noreferrer" target="_blank">The study can be downloaded here in German.</a></p>]]></description>
                <author><![CDATA[iRights.Lab GmbH <kontakt@irights-lab.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 18:52:47 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Berlin Science Week – Sustainable Digitalisation in Urban Areas]]></title>
                <link>https://www.beyond-eve.com/en/events/berlin-science-week-sustainable-digitalisation-in-urban-areas</link>
                <description><![CDATA[<p>As part of this year's Berlin Science Week, the Alexander von Humboldt Institute for Internet and Society (HIIG), the Einstein Center Digital Future (ECDF) and the Weizenbaum Institute for the Networked Society are organizing a joint event on "Sustainable Digitization in Urban Areas".</p><p>The first part of the event consists of three virtual parallel workshops. The HIIG is proud to host the online workshop: “Citizens, give us your problems! How to Open Data without giving it away.” The event will conclude with a panel discussion about the workshops' outcomes and the overarching question of how to enable a sustainable digitalisation in cities like Berlin. The virtual panel discussion will be open to a broader public through a livestream (on this website).</p><p><strong>Panel Speaker</strong></p><p><strong>Andrea Cominola | </strong>Junior Professor for Smart Water Networks at the <a href="https://www.digital-future.berlin/forschung/projekte/smart-water-survey/" rel="noopener noreferrer" target="_blank">Einstein Center Digital Future</a> (ECDF) and Technische Universität Berlin. 
His research focuses on the modeling and management of water and energy demand, the detection of leakages and cyber-physical anomalies, behavior modeling, data mining and machine learning.</p><p><strong>Luiza Bengtsson | </strong>Member of the <a href="https://www.hiig.de/en/research/data-actors-infrastructures/" rel="noopener noreferrer" target="_blank">Data, Actors, Infrastructures</a>&nbsp;team at HIIG, working on Data &amp; Society Interface research projects with the vision to enable open data access for the public good, without data sharing in the classical sense and without collateral damage to individuals or institutions.</p><p><strong>Ophélie Ivombo | </strong>Program officer for Digitisation of the Consumer Advice Centre Berlin and <a href="https://digitalesberlin.info/" rel="noopener noreferrer" target="_blank">Bündnis Digitale Stadt Berlin</a>.&nbsp;</p><p><strong>Thomas Krause |</strong> Project Manager Digitisation Strategy, <a href="https://www.berlin.de/sen/web/" rel="noopener noreferrer" target="_blank">Senate Department for Economics, Energy and Public Enterprises.</a></p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 14:29:00 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Ethics of the Digital Transformation]]></title>
                <link>https://www.beyond-eve.com/en/events/ethics-of-the-digital-transformation</link>
                <description><![CDATA[<p>The Berkman Klein Center for Internet &amp; Society at Harvard University was delighted to welcome the President of Germany, <a href="https://www.bundespraesident.de/EN/Home/home_node.html" rel="noopener noreferrer" target="_blank">Dr. Frank-Walter Steinmeier</a>, to campus for a special event on November 1&nbsp;to discuss the Ethics of the Digital Transformation.</p><p>President Steinmeier&nbsp;participated in an interactive panel session on the ethical, legal, and societal implications of digital technologies across geographies and as viewed from different disciplines. The discussion was moderated by <a href="https://cyber.harvard.edu/people/ugasser" rel="noopener noreferrer" target="_blank">Urs Gasser</a>, Executive Director of the Berkman Klein Center and Professor of Practice at Harvard Law School, and&nbsp;included a Q&amp;A.</p><p><br></p><p>In addition to the German President, the following experts&nbsp;joined the open dialogue:</p><ul><li><a href="https://www.wzb.eu/en/persons/jeanette-hofmann" rel="noopener noreferrer" target="_blank">Jeanette Hofmann</a>, Director of the Alexander von Humboldt Institute for Internet and Society, Professor of Internet Politics, Free University of Berlin</li><li><a href="https://publichealth.nyu.edu/faculty/s-matthew-liao" rel="noopener noreferrer" target="_blank">Matthew Liao</a>, Director of the Center for Bioethics, Arthur Zitrin Professor of Bioethics, New York University</li><li><a href="https://mnobles.mit.edu/about-melissa-nobles" rel="noopener noreferrer" target="_blank">Melissa Nobles</a>, Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences, Professor of Political Science, MIT</li><li><a href="https://www.hiig.de/en/wolfgang-schulz/" rel="noopener noreferrer" target="_blank">Wolfgang Schulz</a>, Director of the Leibniz Institute for Media Research |&nbsp;Hans-Bredow-Institute, Professor for Media Law and Public Law, University of Hamburg</li><li><a 
href="https://www.uni-goettingen.de/en/weber-guskar-eva-pd-dr/80253.html" rel="noopener noreferrer" target="_blank">Eva Weber-Guskar</a>, Guest Professor for Philosophy, Humboldt University of Berlin</li><li><a href="https://hls.harvard.edu/faculty/directory/11405/Yang" rel="noopener noreferrer" target="_blank">Crystal S. Yang</a>, Professor of Law, Harvard Law School</li></ul><p><br></p><p>This special event&nbsp;is part of the Berkman Klein Center’s<a href="https://cyber.harvard.edu/topics/ethics-and-governance-ai" rel="noopener noreferrer" target="_blank"> Ethics and Governance of Artificial Intelligence</a> Initiative.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:48:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[3rd Edge Computing Forum]]></title>
                <link>https://www.beyond-eve.com/en/events/3rd-edge-computing-forum</link>
                <description><![CDATA[Since the 1960s we have observed paradigm shifts in distributed computing, from mainframes to client-server models and back to centralized cloud approaches. The next development will be the distribution of intelligence back to the topological edge of the network. This natural evolution decreases dependency and load on the network and enhances data privacy and protection. Edge Computing is applied in several application domains and enables technologies such as 5G, Artificial Intelligence, the Industrial Internet of Things, and Digital Twins.

As the Edge Computing market is estimated to generate a value of up to 19 billion EUR by 2023, the forum will present the latest technological approaches and their benefits in the area of Edge Computing, and discuss the open issues in building an industrial Edge-based ecosystem by making infrastructures interoperable, programmable, secure and easy to use. This includes the identification of reference architectures, open standards, available implementations and reference technology stacks, and their evaluation within use cases. Further, best practices and experiences gained from recent testbeds will be presented. Last year, more than 100 decision makers, key experts, innovators and early adopters from companies such as Siemens, IBM, Huawei, Vodafone and Telekom were represented.]]></description>
                <author><![CDATA[Fraunhofer-Gesellschaft - Institute for Open Communication Systems <info@fokus.fraunhofer.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:22 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Contesting Algorithms]]></title>
                <link>https://www.beyond-eve.com/en/events/contesting-algorithms</link>
                <description><![CDATA[<p>Artificial Intelligence (AI) is transforming the way we govern human behavior, undermining the checks and balances that were intended to safeguard fundamental freedoms in liberal democracies. Governance by AI is a dynamic, data-driven framework of governance in which decisions regarding a particular instance are shaped by data analytics. Systems are designed ex ante to optimize certain functions, in non-transparent ways that challenge oversight and may bypass the social contract. This could be game-changing for democracy, as it facilitates the rise of unchecked power.</p><p>The case of content moderation by online platforms offers an interesting example. Platforms such as Google, Facebook and Twitter are responsible for mediating much of the public discourse and governing access to speech and speakers around the world. Social media platforms use AI to match users and content, to adjudicate conflicting claims regarding the legitimate use of content on their systems, and to detect and expeditiously remove illegal content. AI is applied in content moderation out of the practical need to operate in a dynamic, ever-growing digital landscape; as an innovative competitive advantage; or simply to ensure legal compliance or to avoid a public outcry.</p><p>The use of AI to filter unwarranted content cannot be sufficiently addressed by traditional legal rights and procedures, since these tools are ill-equipped to address the robust, non-transparent and dynamic nature of governance by AI. Consequently, in a digital ecosystem governed by AI, we currently lack sufficient safeguards against the blocking of legitimate content. 
Moreover, we lack a space for negotiating meaning and for deliberating the legitimacy of particular speech.</p><p><br></p><p><strong>Professor Elkin-Koren </strong>is the coauthor of The Limits of Analysis: Law and Economics of Intellectual Property in the Digital Age (2012) and Law, Economics and Cyberspace: The effects of Cyberspace on the Economic Analysis of Law (2004). She is the coeditor of Law and Information Technology (2011) and The Commodification of Information (2002). Her publications are listed <a href="http://law.haifa.ac.il/index.php/en/faculty-e/academic-staff/70-english/staff-eng/faculty-eng/419-el" rel="noopener noreferrer" target="_blank">here</a>.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:47:24 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Identity-management and citizen scoring in Ghana, Rwanda, Tunisia, Uganda, Zimbabwe and China]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/identity-management-and-citizen-scoring-in-ghana-rwanda-tunisia-uganda-zimbabwe-and-chinanda-simbabwe-und-china</link>
                <description><![CDATA[<p><strong>A review of identity-management practices in five African countries shows that much of the continent is well on its way towards comprehensive biometric registration, which could enable comprehensive citizen scoring or automated surveillance in the near future.</strong></p><p><em>The report </em><a href="https://algorithmwatch.org/wp-content/uploads/2019/10/Identity-management-and-citizen-scoring-in-Ghana-Rwanda-Tunesia-Uganda-Zimbabwe-and-China-report-by-AlgorithmWatch-2019.pdf" rel="noopener noreferrer" target="_blank">Identity-management and citizen scoring in Ghana, Rwanda, Tunisia, Uganda, Zimbabwe and China</a><em> was commissioned from AlgorithmWatch last May by a public-sector organization, which asked not to be cited. We recently obtained permission to publish it here.</em></p><p>In many African countries, the obligation to issue biometric passports in the early 2000s, demanded by the United States and, later, by members of the European Union, opened the door to the biometric registration of whole populations. An industry was set up to provide fingerprint readers, facial recognition technology and a vast array of software to process this newly acquired data.</p><p>by Nicolas Kayser-Bril</p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 18:59:14 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Ethical guidelines issued by engineers’ organization fail to gain traction]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/ethical-guidelines-issued-by-engineers-organization-fail-to-gain-traction</link>
                <description><![CDATA[<p><strong>The world’s largest professional association of engineers released its ethical guidelines for automated systems in March 2019. A review by <em>AlgorithmWatch</em> shows that Facebook and Google have yet to acknowledge them.</strong></p><p>In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, it released in March 2019 the first edition of <em>Ethically Aligned Design</em>, a 300-page treatise on the ethics of automated systems.</p><p>The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in <a href="https://www.nature.com/articles/s42256-019-0088-2.epdf?shared_access_token=QqMd1vZyWLBXUuripKch8dRgN0jAjWel9jnR3ZoTv0NeAfCrIeec5HgDC9f_3XDejMciaob5pTEfucwORxJuEsbLxxbUdajcqFpyxuMc9upBx5IQscFIFTmEht_SfpmSoaNOz0RlQKi0LO5ZVCWJTw%3D%3D" rel="noopener noreferrer" target="_blank">an article published in Nature Machine Intelligence</a> in September. However, one key aspect makes IEEE different from other think tanks. With over 420,000 members, it is the world’s largest engineers’ association, with roots reaching deep into Silicon Valley. 
Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”</p><p>Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.</p><p><em>By </em><strong><em>Nicolas Kayser-Bril</em></strong><em>. Additional research: </em><strong><em>Veronika Thiel</em></strong></p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 19:04:03 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI NOW 2019 Symposium]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-now-2019-symposium</link>
                <description><![CDATA[<strong>The Growing Pushback Against Harmful AI</strong>

The AI Now 2019 Symposium provided behind-the-scenes insights from those on the front lines of the growing pushback against harmful AI.

Our program featured leading lawyers, organizers, scholars, and tech workers, all of whom have employed creative strategies to combat exploitative AI systems across a wide range of contexts, from automated allocation of social services, to policing and border control, to worker surveillance and exploitation, and well beyond.

We shared the stories of those at the forefront, drawing on their insight and experience as we work together to ensure that AI is accountable to the people whose lives it most affects.]]></description>
                <author><![CDATA[New York University - AI Now Institute]]></author>
                <pubDate>Fri, 04 Dec 2020 21:21:23 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[“Robot judges” without training?]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/robot-judges-without-training</link>
                <description><![CDATA[<p><em>Discussing automated decision-making systems as the savior of overburdened legal decision makers is in vogue. But when such systems are employed in place of human decision makers, and as the complexity of legal decisions rises, they face structural problems and barriers that are hard to resolve. </em><strong><em>Dr. Stephan Dreyer </em></strong><em>and</em><strong><em> Johannes Schmees</em></strong><em> explain this by reference to four technical and legal challenges, seeking to establish a differentiated perspective in the emerging discourse with an eye on technical and legal realities. </em></p><p>doi: <a href="https://doi.org/10.5281/zenodo.3484550" rel="noopener noreferrer" target="_blank">10.5281/zenodo.3484550</a></p><p><strong><em>Dr. Stephan Dreyer</em></strong><em> is Senior Researcher and </em><strong><em>Johannes Schmees</em></strong><em> is Junior Researcher at the Leibniz-Institute for Media Research | Hans-Bredow-Institut. This entry is based on a forthcoming and extensive article that came into being in the context of the interdisciplinary research project “Deciding about, by and together with ADM-Systems.”</em></p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 14:23:22 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[A New Jim Code?]]></title>
                <link>https://www.beyond-eve.com/en/events/a-new-jim-code</link>
                <description><![CDATA[<strong>Featuring Ruha Benjamin on Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life</strong>

From everyday apps to complex algorithms, technology has the potential to hide, speed up, and even deepen discrimination, while appearing neutral and even benevolent when compared to racist practices of a previous era. In this talk, Ruha Benjamin presents the concept of the “New Jim Code” to explore a range of discriminatory designs that encode inequity: by explicitly amplifying racial hierarchies, by ignoring but thereby replicating social divisions, or by aiming to fix racial bias but ultimately doing quite the opposite. Ruha will also consider how race itself is a kind of tool designed to stratify and sanctify social injustice and discuss how technology is and can be used toward liberatory ends. This presentation delves into the world of biased bots, altruistic algorithms, and their many entanglements, and provides conceptual tools to decode tech promises with sociologically informed skepticism. In doing so, it challenges the audience to question not only the technologies we are sold, but also the ones we manufacture ourselves.]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:46:02 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Colonized by Data: The Costs of Connection with Nick Couldry and Ulises Mejias]]></title>
                <link>https://www.beyond-eve.com/en/events/colonized-by-data-the-costs-of-connection-with-nick-couldry-and-ulises-mejias</link>
                <description><![CDATA[<p>This talk introduces the speakers’ new book,&nbsp;<em>The Costs of Connection: How Data Colonizes Human Life and Appropriates it for Capitalism</em>&nbsp;(Stanford University Press, August 2019). Couldry and Mejias argue that the role of data in society needs to be grasped as not only a development of capitalism, but as the start of a new phase in human history that rivals in importance the emergence of historic colonialism. This new "data colonialism" is based not on the extraction of natural resources or labor, but on the appropriation of human life through data, paving the way for a further stage of capitalism. Today’s transformations of social life through data must therefore be grasped within the long historical arc of dispossession, as both a new colonialism and an extension of capitalism. Resistance requires challenging once again the forms of coloniality that decolonial thinking has foregrounded for centuries. The struggle will be both broader and longer than many analyses of algorithmic power suppose, but for that reason critical responses are all the more urgent.</p><p><br></p><p><strong>Nick Couldry</strong> is a sociologist of media and culture. He is Professor of Media Communications and Social Theory at the London School of Economics and Political Science, and since 2017 has been a Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society. In fall 2018 he was also a Visiting Professor at MIT. He jointly led, with Clemencia Rodriguez, the chapter on media and communications in the 22-chapter 2018 report of the International Panel on Social Progress: <a href="http://www.ipsp.org" rel="noopener noreferrer" target="_blank">www.ipsp.org</a>. His latest books are <em>The Costs of Connection</em> and <em>Media: Why It Matters</em> (Polity: October 2019). 
</p><p><br></p><p><strong>Ulises Ali&nbsp;Mejías</strong> is associate professor of Communication Studies and director of the Institute for Global Engagement at the State University of New York, College at Oswego. He is a media scholar whose work encompasses critical internet studies, network theory and science, philosophy and sociology of technology, and political economy of digital media. He is the author of&nbsp;<em>Off the Network: Disrupting the Digital World</em>&nbsp;(University of Minnesota Press, 2013) and various articles including ‘Disinformation and the Media: The case of Russia and Ukraine’ in Media, Culture and Society (2017, with N. Vokuev), and ‘Liberation Technology and the Arab Spring: From Utopia to Atopia and Beyond’ in Fibreculture (2012). He is the principal investigator in the Algorithm Observatory project. </p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:44:41 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Big Data and Spurious Correlations]]></title>
                <link>https://www.beyond-eve.com/en/events/big-data-and-spurious-correlations</link>
                <description><![CDATA[Big data analytics is a remarkable new field of investigation. However, the effectiveness of the new field seems to encourage an aggressive “philosophy” or “methodology” based on the dictum that “with enough data, the numbers speak for themselves”. We show, using Ramsey theory and algorithmic information theory, that this view is radically wrong. Specifically, we prove that, exactly because of their very large size, databases have to contain arbitrary correlations, most of them spurious. These correlations appear only on account of the size, not because of the nature of data. The scientific method can be enriched by computer mining over immense databases, but cannot be replaced by it.
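The abstract's central claim, that very large databases must contain spurious correlations purely because of their size, can be illustrated with a quick simulation. This is a hypothetical sketch, not material from the talk: with only 30 observations and 2,000 purely random candidate variables, at least one variable will correlate strongly with a random target by chance alone.

```python
import random

random.seed(0)

n_rows = 30    # few observations
n_cols = 2000  # many candidate variables

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

target = [random.random() for _ in range(n_rows)]

# Every column is independent noise, so any strong correlation
# with `target` is spurious: an artifact of the dataset's width.
best = max(abs(corr([random.random() for _ in range(n_rows)], target))
           for _ in range(n_cols))
print(best)
```

With these sizes the strongest spurious correlation is typically well above 0.5, even though no column has any real association with the target; the more columns you mine, the larger the best chance correlation becomes.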

Prof. Dr. Cristian S. Calude]]></description>
                <author><![CDATA[Collegium Helveticum]]></author>
                <pubDate>Sat, 05 Dec 2020 22:03:49 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Can Tech be Governed?]]></title>
                <link>https://www.beyond-eve.com/en/events/can-tech-be-governed</link>
                <description><![CDATA[The twenty-odd years of the mainstream digital revolution have seen it transformed in the public eye from a promise into a threat. This pessimism is reflected in assessments of the latest pervasive technology: AI generally, and machine learning specifically. How different is this technology from what preceded it, and do we need new ways to govern it? If so, how would they come about?]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:41:43 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Defective computing: How algorithms use speech analysis to profile job candidates]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/defective-computing-how-algorithms-use-speech-analysis-to-profile-job-candidates</link>
                <description><![CDATA[<p><strong>Some companies and scientists present Affective Computing, the algorithmic analysis of personality traits also known as “artificial emotional intelligence”, as an important new development. But the methods that are used are often dubious and present serious risks for discrimination.</strong></p><p><strong>It was announced with some fanfare that Alexa and others would soon demonstrate breakthroughs in the field of emotion analysis. Much is written about affective computing, but products are far from market ready. For example, Amazon’s emotion assistant </strong><a href="https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions" rel="noopener noreferrer" target="_blank">Dylan</a> is said to be able to read human emotions just by listening to people’s voices. However, Dylan currently only exists in the form of a patent.</p><p>So far, Amazon, Google et al. have not launched such products. Identifying unique signals that indicate that someone is sad seems to be a bit more complicated than they initially thought. Maybe someone’s voice sounds depressed because they are depressed, but maybe they are just tired or exhausted.</p><p>However, these difficulties do not prevent other companies from launching products that claim to have solved these complex problems by using voice and speech for character and personality analysis.</p><p>In Germany, two examples spring to mind. One is the company Precire, based in Aachen, a city on the border with Belgium. Their idea: you record a voice sample, and based on the person’s choice of words, sentence structure and many other indicators, the software then produces an analysis of their character traits. The software can be used in staff recruitment or to identify candidates for promotion.</p><p>The company states that its software carries out the analysis based on a 15-minute language sample. 
The then-CEO Mario Reis stated in an <a href="https://blog.recrutainment.de/2016/05/11/persoenlichkeitsprofil-aus-der-analyse-von-sprache-einfach-nur-creepy-oder-die-technologie-von-morgen-interview-mit-mario-reis-von-psyware-und-britta-nollmann-von-randstad/" rel="noopener noreferrer" target="_blank">interview</a> in 2016 that the results were based on science and had been scientifically tested. This statement is repeated in <a href="https://www.springer.com/de/book/9783658187705" rel="noopener noreferrer" target="_blank">a book</a> published in 2018. This book also cites additional studies and findings to further support the scientific grounding of the method.</p><p><em>By Veronika Thiel</em></p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 17:26:38 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[In the War of Disinformation—Trolls Versus the Defenders of Democratic Discourse]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/processing-raw-materials-8</link>
                <description><![CDATA[<h3>Working as a think tank on behalf of the State Media Authority of North Rhine-Westphalia, iRights.Lab regularly compiles a Research Monitor on the topic of information intermediaries. The third edition of this report is now available with the title “In the War of Disinformation—Trolls Versus the Defenders of Democratic Discourse.”</h3><p><br></p><p>Especially in connection with the last elections to the European Parliament, various forms of disinformation, propagated over social networks, played an important role. This is one of the major focuses of the publication. Online “trolls” work deliberately to weaken individual persons or opinions, and are met on the other side by the defenders of democratic discourse. Additionally, the question arises as to what role private companies play in the struggle to uphold basic democratic values. And how can or should policymakers intervene to regulate this sector? In the contested field of information intermediaries, it is phenomena such as fake news, hate speech and filter bubbles that come to the fore.</p><p>In this paper, we also discuss the meaning and definition of the term information intermediary. Increasingly, algorithms automatically influence people’s everyday media realities. 
In particular, the data that social media and other services collect from their users plays a key role in shaping the information people receive in personalized news feeds or search engine results.&nbsp;</p><p>In addition to these and related topics, this issue of the Research Monitor also deals with current research projects, for example on populism in social networks or on the difficulty of proving or disproving the existence of filter bubbles in social networks.</p><p>An upcoming event is also announced that will deal with the question of how news reaches users today and whether users come into contact with news items via social networks that otherwise would not have reached them.</p><p>You can download the entire issue of the Research Monitor <a href="https://www.medienanstalt-nrw.de/fileadmin/user_upload/lfm-nrw/Foerderung/Forschung/Dateien_Forschung/Forschungsmonitor_Informationsintermediare_3.Ausgabe.pdf" rel="noopener noreferrer" target="_blank">here</a>.</p>]]></description>
                <author><![CDATA[iRights.Lab GmbH <kontakt@irights-lab.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 17:22:23 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[„Scraping the Demos“: Political Epistemologies of Big Data]]></title>
                <link>https://www.beyond-eve.com/en/events/scraping-the-demos-political-epistemologies-of-big-data</link>
                <description><![CDATA[<p>The conference explores political epistemologies of big data. Political epistemologies are the practices by which societies construct politically relevant knowledge and the criteria by which they evaluate it. Big data is the practice of deriving socially relevant knowledge from massive and diverse digital trace data. Practices such as <strong>“big data analysis”, “web scraping”, “opinion mining”, “sentiment analysis”, “predictive analytics”,</strong> and <strong>“nowcasting” </strong>seem to be common currency in the public and academic debate about the present and future of evidence-based policy making and representative democracy. Political elites see digital technologies as sources of new and better tools for learning about the citizenry, for increasing political responsiveness and for improving the effectiveness of policies. Political parties and advocacy groups use digital data to address citizens and muster support in a targeted manner; public authorities try to tailor public policy to public sentiment measured online, to forecast and prevent events (as in predictive policing, preemptive security and predictive healthcare), and to continuously adapt policies based on real-time monitoring. An entire industry of policy consultants and technology companies thrives on the promise of the political power of digital data and analytics. And finally, academic research engages in digitally enhanced computational social sciences, digital methods and social physics on the basis of digital trace data, machine learning and computer simulations.</p>]]></description>
                <author><![CDATA[Research for the Networked Society]]></author>
                <pubDate>Sat, 05 Dec 2020 21:08:52 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Code name "X, Y & Z": How the Enigma was cracked in World War II]]></title>
                <link>https://www.beyond-eve.com/en/events/code-name-x-y-z-how-the-enigma-was-cracked-in-world-war-ii</link>
                <description><![CDATA[Under the camouflage designation "X, Y & Z", French, British and Polish secret services worked together during World War II to decode the German Enigma machine. Before it could be cracked in England, code breakers worked in occupied France, and they continued their work for the British secret service during the Cold War.

Lecture in English with simultaneous interpretation.

Lecturer: Sir John Dermot Turing, Author, Historian and Lawyer, St Albans/United Kingdom (Dermot Turing is the nephew of the famous English mathematician Alan Turing)

Free entrance]]></description>
                <author><![CDATA[Heinz Nixdorf MuseumsForum <service@hnf.de>]]></author>
                <pubDate>Fri, 11 Dec 2020 17:22:23 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Auditing for Bias in Resume Search Engines]]></title>
                <link>https://www.beyond-eve.com/en/events/auditing-for-bias-in-resume-search-engines</link>
                <description><![CDATA[<p>There is growing awareness and concern about the role of automation in hiring, and the potential for these tools to reinforce historic inequalities in the labor market. In this work, Wilson and his team perform an algorithm audit of the resume search engines offered by several of the largest online hiring platforms, to understand the relationship between a candidate's gender and their rank in search results. They&nbsp;audit these platforms with respect to individual and group fairness, as well as indirect and direct discrimination.&nbsp;</p><p>Christo Wilson's&nbsp;<a href="https://cbw.sh/index.html" rel="noopener noreferrer" target="_blank">homepage</a></p><p><a href="https://cbw.sh/static/pdf/chen-chi18.pdf" rel="noopener noreferrer" target="_blank">Investigating the Impact of Gender on Rank in Resume Search Engines</a>&nbsp;by Le Chen,&nbsp;Ruijun Ma, Anikó Hannák and&nbsp;Christo Wilson</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:38:41 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Everyday Chaos]]></title>
                <link>https://www.beyond-eve.com/en/events/everyday-chaos</link>
                <description><![CDATA[<p><strong>The Internet and AI are not only changing the future, they're changing our ideas about how the future arises from the present. With this comes changes in some of our most basic and ancient strategies for surviving, managing, and thriving.</strong></p><p>In his new book,&nbsp;<em>Everyday Chaos</em>, <strong>David Weinberger</strong> points to accepted ways we work on the Internet that in fact undo our old assumptions about how the future works: rather than attempting to anticipate what will happen and prepare for it, the Internet is training us to flourish by creating more and more unfathomable possibilities. The Net has also lowered the cost of operating without principles, hypotheses, or even hunches about what will work.</p><p>AI in the form of machine learning now is providing us with a model -- a model of models --&nbsp;of how the future happens, with implications that range from how businesses make decisions to how we think about strategy, progress, explanations, morality, and even the nature of meaning itself.</p><p>These changes can be "metaphysically terrifying," Weinberger says, but ultimately are an evolutionary step of a Copernican magnitude.</p><p><br></p><h3>About David</h3><p>From the earliest days of the Web, David Weinberger has been a pioneering thought leader about the Internet’s effect on our lives, on our businesses, and most of all, on our ideas. 
He has contributed in a range of fields, from marketing to libraries to politics to journalism and more.</p><p>He has contributed in a remarkably wide range of ways as well: through books that explore the meaning of our new technology; as a writer for publications from&nbsp;<em>Wired</em>&nbsp;and&nbsp;<em>Scientific American</em>&nbsp;to&nbsp;<em>Harvard Business Review</em>&nbsp;and even&nbsp;<em>TV Guide</em>; as an acclaimed keynote speaker around the world; a strategic marketing vice president and consultant; a teacher; an Internet adviser to presidential campaigns; an early social-networking entrepreneur; the codirector of the groundbreaking Harvard Library Innovation Lab; a writer-in-residence at a Google AI lab; a senior researcher at Harvard’s Berkman Klein Center for Internet &amp; Society; a fellow at Harvard’s Shorenstein Center on Media, Politics and Public Policy; a Franklin Fellow at the US State Department; and always a passionate advocate for an open internet.</p><p>&nbsp;</p><h3>About Joi</h3><p>Joichi "Joi" Ito is an activist, entrepreneur, venture capitalist and scholar focusing on the ethics and governance of technology, tackling complex problems such as climate change, societal inequity and redesigning the systems that support scholarship and science. As director of the MIT Media Lab and a Professor of the Practice in Media Arts and Sciences, he supports researchers at the Media Lab to deploy design, science, and technology such as AI, cryptography, and synthetic biology to transform society in substantial and positive ways.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:37:27 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[IGNITE Talks at BKC]]></title>
                <link>https://www.beyond-eve.com/en/events/ignite-talks-at-bkc</link>
                <description><![CDATA[Berkman Klein community members share their research, passions, and musings in five-minute Ignite Talks. Topics include the data economy in the European Union, maternal health around the world, youth and privacy online in Latin America, Ubuntu as an ethical framework for AI, collecting secrets, and more!]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:34:47 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Having our cake and eating it too]]></title>
                <link>https://www.beyond-eve.com/en/events/having-our-cake-and-eating-it-too</link>
                <description><![CDATA[<p>It has been argued that competitive pressures could cause AI developers to cut corners on the safety of their systems. If this is true, however, why don't we see this dynamic play out more often in other private markets?</p><p>In this talk <a href="http://www.amandaaskell.com" rel="noopener noreferrer" target="_blank">Amanda Askell</a>&nbsp;-&nbsp;research scientist in ethics and policy at OpenAI<em> -&nbsp;</em>outlines the standard incentives to produce safe products: market incentives, liability law, and regulation. Askell argues that if these incentives are too weak because of information asymmetries or other factors, competitive pressure could cause firms to invest in safety below a level that is socially optimal.</p><p><br></p><p>In such circumstances, responsible AI development is a kind of collective action problem. Askell&nbsp;develops a conceptual framework to help identify levers to improve the prospects for cooperation in this kind of collective action problem.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:33:25 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Research Monitor Microtargeting]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/ten-years-after-the-global-food-price-crisis-10</link>
                <description><![CDATA[<h3>The iRights.Lab think tank produces a regular Research Monitor on behalf of the State Media Authority of North Rhine-Westphalia.</h3><p><br></p><p>German and European researchers have thus far dealt only tentatively with the topic of microtargeting in election campaigns. Most of the research projects and scientific papers on the subject are from the USA. At least since Barack Obama’s 2008 election campaign, it has been clear that both Democrats and Republicans in the US rely heavily on data-driven processes in their election campaigns. In the paper <em>State of Research: Microtargeting in Germany and Europe</em>, we summarize the current expert debate on microtargeting in political communication, point to gaps in the research, and suggest where new work is needed.</p><p>The paper was commissioned by the <strong>Landesanstalt für Medien NRW</strong>. The publication can be <a href="https://www.medienanstalt-nrw.de/fileadmin/user_upload/lfm-nrw/Foerderung/Forschung/Dateien_Forschung/Forschungsmonitoring_Microtargeting_Deutschland_Europa.pdf" rel="noopener noreferrer" target="_blank">downloaded</a> (German) free of charge from the Media Authority’s website and from our own.</p>]]></description>
                <author><![CDATA[iRights.Lab GmbH <kontakt@irights-lab.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 17:29:46 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[How to work with tech companies on human rights]]></title>
                <link>https://www.beyond-eve.com/en/events/how-to-work-with-tech-companies-on-human-rights</link>
                <description><![CDATA[<p>How can advocates, activists, and academics work with technology companies to advance human rights? When do public “name and shame” campaigns make a difference compared to confidential conversations? David Sullivan, director of learning and development at the Global Network Initiative, has spent the past decade working closely with technology companies on vexing human rights challenges, from conflict minerals in hardware supply chains to fighting censorship and surveillance online.</p><p>In this talk, he draws upon a contentious exchange with Steve Jobs about the Democratic Republic of Congo to offer insights into how companies and civil society can work together on tough issues at the intersection of technology and human rights online.</p><p>He is joined in conversation by Berkman Klein Fellow Chinmayi Arun.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:31:44 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Dirty Data, Bad Predictions]]></title>
                <link>https://www.beyond-eve.com/en/events/dirty-data-bad-predictions</link>
                <description><![CDATA[<p><strong>How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Society</strong></p><p>This talk will explore <strong>Rashida Richardson</strong>'s recent research on the provenance of police data commonly used in predictive policing systems. The research reviews Department of Justice consent decrees and other federal court-monitored settlements related to police practices to examine the link between unlawful and biased police practices and the data used to train and/or implement these systems. Rashida will discuss the findings of this research as well as the ways this "dirty data" perpetuates discriminatory police practices and creates self-reinforcing feedback loops throughout the criminal justice system and society writ large.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:14:30 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Constitutionalizing Speech Platforms]]></title>
                <link>https://www.beyond-eve.com/en/events/constitutionalizing-speech-platforms</link>
                <description><![CDATA[<p>We're never going to get a global set of norms for online speech, but should the platforms pick a set of core values and constitutionalize them? Something to tie them to the mast when hard issues arise? What would those values even be? <strong>Kate Klonick</strong> and <strong>Thomas Kadri</strong>, along with panelists <strong>Chinmayi Arun, Kendra Albert,</strong> and <strong>Jonathan Zittrain</strong>, moderated by Elettra Bietti, engage in this discussion.</p><p><br></p><p><strong>Kate Klonick</strong> is&nbsp;an Assistant Professor of Law at St. John's University Law School and an Affiliate Fellow at the&nbsp;<a href="http://isp.yale.edu/" rel="noopener noreferrer" target="_blank">Information Society Project</a>&nbsp;at Yale Law School and New America. She holds a&nbsp;JD from Georgetown University Law Center, where she&nbsp;was&nbsp;a Senior Editor at<em>&nbsp;The Georgetown Law Journal</em>&nbsp;and the Founding Editor of&nbsp;<a href="http://georgetownlawjournal.org/glj-online/" rel="noopener noreferrer" target="_blank"><em>The Georgetown Law Journal Online</em></a>, and a PhD from Yale Law School, where she&nbsp;studied under Jack Balkin, Tom Tyler, and Josh Knobe.&nbsp;Between law school and her time at Yale, she&nbsp;clerked for the Hon.&nbsp;Richard C. Wesley of the Second Circuit and the Hon.&nbsp;Eric N. Vitaliano of the Eastern District of New York.</p><p>Kate has a background in cognitive psychology, which she applies to the study of emerging issues in law and technology. Specifically, this has included research and work on the Internet's effect on freedom of expression and private platform governance. 
She writes and works on issues related to online shaming, artificial intelligence, robotics, content moderation, algorithms, privacy, and intellectual property.</p><p>Her work on these topics has appeared<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2937985" rel="noopener noreferrer" target="_blank">&nbsp;in the&nbsp;<em>Harvard Law Review</em></a>,&nbsp;<em>Maryland Law Review</em>,&nbsp;<em>New York Times</em>,&nbsp;<em>The Atlantic</em>,&nbsp;<em>Slate</em>,&nbsp;<em>The Guardian&nbsp;</em>and numerous other publications.&nbsp;</p><p><br></p><p><strong>Thomas Kadri</strong>&nbsp;is a Ph.D. candidate at Yale Law School, a Resident Fellow at the Yale Information Society Project, and a Mellon Fellow.&nbsp;His research looks at the impact of networked technologies on criminal and tort law, with a particular focus on the constitutional implications of cybersecurity and content moderation on online platforms.&nbsp;He is currently working on an article about how platforms are using anti-hacking laws like the Computer Fraud and Abuse Act to police “public” parts of the internet.&nbsp;His work has been published or is forthcoming in the&nbsp;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2936283" rel="noopener noreferrer" target="_blank"><em>Michigan Law Review</em></a>, the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3332530" rel="noopener noreferrer" target="_blank"><em>Southern California Law Review</em></a>, the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3247273" rel="noopener noreferrer" target="_blank"><em>Maryland Law Review</em></a>,&nbsp;the&nbsp;<a href="https://www.nytimes.com/2018/11/17/opinion/facebook-supreme-court-speech.html" rel="noopener noreferrer" target="_blank"><em>New York Times</em></a>, and&nbsp;<a href="https://slate.com/technology/2018/11/facebook-zuckerberg-independent-speech-content-appeals-court.html" rel="noopener noreferrer" target="_blank"><em>Slate</em></a>.&nbsp;He is also 
an Adjunct Professor at New York Law School, where he teaches Cybercrime.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:12:42 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Machines Learning to Find Injustice]]></title>
                <link>https://www.beyond-eve.com/en/events/machines-learning-to-find-injustice</link>
                <description><![CDATA[<strong>Featuring HLS Climenko Fellow and Lecturer on Law, Ryan Copus</strong>
Predictive algorithms can often outperform humans in making legal decisions. But when used to automate or guide decisions, predictions can embed biases, conflict with a "right to explanation," and be manipulated by litigants. We should instead use predictive algorithms to identify unjust decisions and subject them to secondary review. 

*This event will be live webcast here at noon on the event date.*]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:10:52 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Waking Up to the Internet Platform Disaster]]></title>
                <link>https://www.beyond-eve.com/en/events/waking-up-to-the-internet-platform-disaster</link>
                <description><![CDATA[<p>Join us for a conversation with <strong>Roger McNamee</strong>, author of <em>Zucked: Waking Up to the Facebook Catastrophe</em>, and <strong>Lawrence Lessig</strong>, the Roy L. Furman Professor of Law and Leadership at Harvard Law School. Facebook, Google and other internet platforms employ a business model – surveillance capitalism – that is undermining public health, democracy, privacy, and innovation in unprecedented ways. They use persuasive technology to manipulate attention for profit. They use surveillance to build data sets with the goal of influencing user behavior. The negative externalities of internet platforms are analogous to those of medicine in the early 20th century and chemicals in the mid-20th century, situations that required substantial regulatory intervention.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:09:53 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Smart Enough City]]></title>
                <link>https://www.beyond-eve.com/en/events/the-smart-enough-city</link>
                <description><![CDATA[Putting Technology in Its Place to Reclaim Our Urban Future
Smart cities, where technology is used to solve every problem, are hailed as futuristic urban utopias. We are promised that apps, algorithms, and artificial intelligence will relieve congestion, restore democracy, prevent crime, and improve public services. In The Smart Enough City, <strong>Ben Green</strong> warns against seeing the city only through the lens of technology; taking an exclusively technical view of urban life will lead to cities that appear smart but under the surface are rife with injustice and inequality. He proposes instead that cities strive to be “smart enough”: to embrace technology as a powerful tool when used in conjunction with other forms of social change—but not to value technology as an end in itself.

*Time: 12:00 PM - 1:15 PM ET*
*This event will be live webcast on this page at noon on the event date.*]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:08:31 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Cyberlaw and Human Rights]]></title>
                <link>https://www.beyond-eve.com/en/events/cyberlaw-and-human-rights</link>
                <description><![CDATA[<p>After two decades of little direct legislation of the internet, national laws and related court decisions meant to govern cyberspace are rapidly proliferating worldwide. They are becoming building blocks in new legal frameworks that will shape the evolution of Internet governance and policymaking for years to come.</p><p>In the Global South and particularly under repressive regimes, these frameworks can be imposed with little regard for human rights obligations and without a full understanding of the technologies and processes they regulate or their implications for the preservation of the core values of the internet: interoperability, universality, and free expression and the free flow of information.</p><p>This panel brings together practitioners from five international organizations monitoring the development of legislation and case law related to cyberspace to discuss the implications for the future of human rights online.</p><p><br></p><h2>Panelists</h2><p><em>Moderator</em></p><p><strong>Robert Faris</strong> is the research director at the Berkman Klein Center, where he contributes to and provides oversight of research at the center. His research includes the study of digital communication mechanisms by civil society organizations and social movements, and the emergence and impact of digitally-mediated collective action, as well as the influence of networked digital technologies on democracy and governance and the evolving role of new media in political change.</p><p><br></p><p><strong>Dr. Hawley Johnson</strong> is the Project Manager for Columbia Global Freedom of Expression, an initiative to advance the understanding of international and national norms and institutions that best protect the free flow of information and expression in an interconnected global community. Hawley has over twelve years of experience in international media development both academically and professionally, with a focus on Eastern Europe. 
She recently worked with the award-winning Organized Crime and Corruption Reporting Project to launch the Investigative Dashboard (ID), a joint effort with Google Ideas offering specialized databases and research tools for journalists in emerging democracies.</p><p><br></p><p><strong>Robert Muthuri</strong> is currently a Research Fellow – ICT at the Centre for IP and IT (CIPIT) at the Strathmore School of Law. He is a Legal Knowledge Engineer working at the intersection of legal theory and AI. Robert is an Advocate of the High Court of Kenya who, convinced that technology has far more to offer the legal domain, went on to pursue a career in legal informatics.</p><p><br></p><p><strong>Juan Carlos Lara</strong> is a Chilean lawyer specializing in law and technology, currently working as the manager of the Public Policy and Research team at Derechos Digitales, a non-governmental organisation based in Santiago de Chile that promotes and defends digital rights in Latin America. He has worked as a consultant in intellectual property for public and private entities, has been a research assistant at the Centre of Studies in Cyber Law at the University of Chile, and is currently an LL.M. candidate at UC Berkeley. At Derechos Digitales, he leads research and policy analysis on technology and data privacy, equality, freedom of expression, and access to knowledge and human rights in online platforms.</p><p><br></p><p><strong>Gayatri Khandhadai</strong> is a lawyer with a background in international law and human rights, international and regional human rights mechanisms, research, and advocacy. She previously worked with national and regional human rights groups, focusing on freedom of expression. She coordinated the IMPACT — India, Malaysia, Pakistan Advocacy for Change through Technology — project with the Association for Progressive Communications. 
Her current focus is on digital rights in Asia with specific emphasis on freedoms of expression, assembly, and association on the Internet.</p><p><br></p><p><strong>Jessica Dheere</strong> is co-founder of the Beirut–based digital rights research, training, and advocacy organization SMEX (smex.org) and a 2018-19 research fellow at the Berkman Klein Center for Internet and Society at Harvard University. She is also incubating director of the recently launched CYRILLA Collaborative (cyrilla.org), a global initiative that maps and analyzes the emergence and evolution of legal frameworks in digitally networked environments through open research, data models, and databases.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:07:58 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Ethics and Accountability of Algorithmic Decision Making Systems]]></title>
                <link>https://www.beyond-eve.com/en/events/ethics-and-accountability-of-algorithmic-decision-making-systems</link>
                <description><![CDATA[<p>The new 'sexiest job on earth', according to former Google CEO Eric Schmidt, is the "data scientist". As data scientists, we are tasked with crawling through data, finding patterns that can predict the future. Our tool set is huge and grows every day, foremost through methods from machine learning. In many cases, the question to be solved by our learning systems is clear-cut, and so is the quality measure by which we can evaluate whether the systems are good enough to be applied. However, neither is the case when we build systems to predict future human behavior or to classify current human behavior. For this, 1) intricate social concepts have to be quantified ("operationalization"), 2) it is often unclear how to define a "good decision", and 3) it is hard to observe whether the system embedded in a social system will actually improve the latter or not. While these problems are often discussed under the term "ethics of algorithms", I will argue that a large part of it is actually a question of accountability. As a community of computer and data scientists, we will have to make sure that we only decide on those parts of these systems for which we are trained, and include the expertise of psychologists, sociologists, lawyers, and politicians where this is not the case. I will show a framework that helps to separate these two aspects and thus avoid mistakes in building learning algorithmic decision-making systems.</p><p><br></p><p>Speaker: <strong>Prof. Dr. Katharina Zweig</strong>, TU Kaiserslautern</p>]]></description>
                <author><![CDATA[Fachgruppe Frauen in der Gesellschaft für Informatik (GI) - IT-Frauen im Rhein-Neckar-Dreieck]]></author>
                <pubDate>Sat, 05 Dec 2020 21:40:49 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Datensicherheit im Netz – Einführung in die Informationssicherheit]]></title>
                <link>https://www.beyond-eve.com/en/events/datensicherheit-im-netz-einfuhrung-in-die-informationssicherheit</link>
                <description><![CDATA[On its way to the target system, a message on the Internet is sent through several networks and via different intermediate stations. The individual stations are responsible for properly forwarding the message and ultimately delivering it to the correct recipient. If the message is sent in plaintext, each of these stations can receive it and read its contents. A potential attacker who controls one of these intermediate systems can therefore also read the contents of the message and even alter it before forwarding it. Such attacks can have severe consequences for communication, since information is no longer confidential and the authenticity of the message can no longer be verified.]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:37 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Centralization - The curse of data-centric digital systems?]]></title>
                <link>https://www.beyond-eve.com/en/events/centralization-the-curse-of-data-centric-digital-systems</link>
                <description><![CDATA[In this talk we look at the successful software architectures that have been developed in the past decades for data-centric digital systems within organizations, collaboration between organizations, and as data-centric platforms for service ecosystems in commerce, finance, mobility, energy and health. They all exhibit a strong tendency towards a hierarchical and centralized structure. We identify the driving forces, benefits and beneficiaries of such architectures but also point out their intrinsic disadvantages and threats not only from a technical but more importantly also from a legal, political and ethical perspective. As a consequence, we call for interdisciplinary (social, legal, economic, technical) design research to foster more decentralized, cooperative, federated, peer-to-peer, or user-centered digital data-management architectures.

<strong>Florian Matthes</strong> has held the chair for Software Engineering for Business Information Systems at Technische Universität München since 2002. The current focus of his research is on technologies driving the digital transformation of enterprises and societies: enterprise architecture management, service platforms and their ecosystems, semantic analysis of legal texts, and executable contracts on blockchains.]]></description>
                <author><![CDATA[Institut für Informatik]]></author>
                <pubDate>Sat, 05 Dec 2020 21:11:55 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Comment on Mark Zuckerberg’s “Independent Governance and Oversight”]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/comment-about-mark-zuckerbergs-independent-governance-and-oversight</link>
                <description><![CDATA[<p><strong>Why Zuckerberg’s “Independent Governance and Oversight” board is not gonna fly (but some of his other ideas are at least worth discussing)</strong></p><p>Mark Zuckerberg is all for regulation of social media all of a sudden. What’s wrong with that picture? In an almost 5,000-word “blog post”, Zuckerberg (plus, we assume, two dozen or so of the company’s public policy hacks and lawyers) has laid out Facebook’s idea of how to deal with the crisis the company is facing. The article is titled “A Blueprint for Content Governance and Enforcement” and structured in 9 parts:</p><p><br></p><p>1. Community Standards</p><p>2. Proactively Identifying Harmful Content</p><p>3. Discouraging Borderline Content</p><p>4. Giving People Control and Allowing More Content</p><p>5. Addressing Algorithmic Bias</p><p>6. Building an Appeals Process</p><p>7. Independent Governance and Oversight</p><p>8. Creating Transparency and Enabling Research</p><p>9. Working Together on Regulation</p><p>...</p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 21:21:10 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Show Me Your Data and I’ll Tell You Who You Are]]></title>
                <link>https://www.beyond-eve.com/en/events/show-me-your-data-and-ill-tell-you-who-you-are</link>
                <description><![CDATA[The Oxford Internet Institute is excited to present OII faculty member Dr Sandra Wachter for the talk "Show Me Your Data and I'll Tell You Who You Are" in London.

We know that Big Data and algorithms are increasingly used to assess and make decisions about us. Algorithms can infer our sexual orientation, political stances, and health status. They also decide what products or newsfeeds we are shown, as well as whether we get hired or promoted, whether we get a loan or insurance, and whether we are admitted to university.

These data-driven decisions shape our identities and reputations and steer our paths in life. But is the way we are assessed fair and just? This talk will explain why we need “a right to reasonable inferences” to retain control over how we are ‘seen’ in a Big Data world and to make sure that the data used to assess us is relevant and accurate and reflects who we really are.

Speaker: <strong>Dr. Sandra Wachter</strong> is a lawyer and Research Fellow in data ethics, AI, robotics and Internet regulation/cyber-security at the Oxford Internet Institute. Sandra is also a Fellow at the Alan Turing Institute in London and a member of the Law Committee of the IEEE. She serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies. Her work has been featured in (among others) The Telegraph, Financial Times, The Sunday Times, The Economist, Science, BBC, The Guardian, Le Monde, New Scientist, and WIRED. In 2018 she won the ‘O2RB Excellence in Impact Award’ and in 2017 the CognitionX ‘AI superhero Award’ for her contributions to AI governance.]]></description>
                <author><![CDATA[The Oxford Internet Institute <enquiries@oii.ox.ac.uk>]]></author>
                <pubDate>Sat, 05 Dec 2020 22:06:03 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI NOW 2018 Symposium]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-now-2018-symposium</link>
                <description><![CDATA[<strong>Ethics, Organizing, and Accountability</strong>

Over the past year, research and advocacy have continued to expose bias, error, and misuse of Artificial Intelligence technologies–from law enforcement’s use of facial recognition to healthcare algorithms that drastically and erroneously cut benefits for the sick. Yet even in the face of increased public awareness and concern, the rapid adoption of these systems across sensitive social and political domains continues, with little oversight or transparency.

Efforts to grapple with these challenges frequently focus on the importance of “AI Ethics,” but questions remain about how to translate ethical promises into meaningful accountability. In parallel, we have also witnessed a shift in urgency and tactics, as academics, advocates, and tech workers organize against the unchecked influence and impact of AI systems.

The AI Now 2018 Symposium addressed the intersection of AI, ethics, organizing, and accountability–examining the landmark events of the past year that have brought these topics squarely into focus. What can we learn from them and where is there more work to be done?]]></description>
                <author><![CDATA[New York University - AI Now Institute]]></author>
                <pubDate>Fri, 04 Dec 2020 21:21:08 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Network Propaganda]]></title>
                <link>https://www.beyond-eve.com/en/events/network-propaganda</link>
                <description><![CDATA[<p>Is social media destroying democracy? Are Russian propaganda or "Fake news" entrepreneurs on Facebook undermining our sense of a shared reality? A conventional wisdom has emerged since the election of Donald Trump in 2016 that new technologies and their manipulation by foreign actors played a decisive role in his victory and are responsible for the sense of a "post-truth" moment in which disinformation and propaganda thrive.</p><p><br></p><p><em>Network Propaganda</em>&nbsp;challenges that received wisdom through the most comprehensive study yet published on media coverage of American presidential politics from the start of the election cycle in April 2015 to the one-year anniversary of the Trump presidency. Analysing millions of news stories together with Twitter and Facebook shares, broadcast television and YouTube, the book provides a comprehensive overview of the architecture of contemporary American political communications. Through data analysis and detailed qualitative case studies of coverage of immigration, Clinton scandals, and the Trump Russia investigation, the book finds that the right-wing media ecosystem operates fundamentally differently than the rest of the media environment.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:03:56 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Antisocial Media]]></title>
                <link>https://www.beyond-eve.com/en/events/antisocial-media</link>
                <description><![CDATA[<p><strong>How Facebook Disconnects Us and Undermines Democracy</strong></p><p>This event is sponsored by Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy and the Berkman Klein Center for Internet &amp; Society at Harvard University.</p><p>Speaker: <strong>Siva Vaidhyanathan</strong> is the Robertson Professor of Media Studies and director of the Center for Media and Citizenship at the University of Virginia. He is the author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy (2018), Intellectual Property: A Very Short Introduction (2017), The Googlization of Everything — and Why We Should Worry (2011), Copyrights and Copywrongs: The Rise of Intellectual Property and How it Threatens Creativity (2001), and The Anarchist in the Library: How the Clash between Freedom and Control is Hacking the Real World and Crashing the System (2004). He also co-edited (with Carolyn Thomas) the collection Rewiring the Nation: The Place of Technology in American Studies (2007). Vaidhyanathan has written for many periodicals, including The New York Times, Bloomberg View, American Scholar, Dissent, The Chronicle of Higher Education, The New York Times Magazine, Slate.com, BookForum, Columbia Journalism Review, Washington Post, The Guardian, Esquire.com, The Virginia Quarterly Review, The New York Times Book Review, and The Nation.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 20:02:23 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Please stand back! User acceptance in automated vehicles]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/please-stand-back-user-acceptance-in-automated-vehicles</link>
                <description><![CDATA[<p><strong>As part of the iKoPA project, Fraunhofer FOKUS is developing an interactive simulator for the virtual testing of automated driving and for the analysis of user acceptance. To this end, various driving scenarios were created in which students at TU Berlin had to monitor the system behavior and intervene in difficult situations. The student with the best test drive result was awarded a prize.</strong></p><p>In order for research and development of fully automated vehicles to continue in Germany, acceptance of automated vehicles by potential users is crucial. As part of the iKoPA research initiative, a simulator for the virtual testing of automated driving is being developed with which users can experience various driving scenarios in a realistic car cockpit. As in a real automated vehicle, the steering wheel turns on its own in the simulator, while the user sees a virtual journey through a 3D world, projected onto a large screen. Various driving scenarios can be simulated, such as an undisturbed drive without other road users under ideal conditions. However, it is also possible to simulate complex traffic situations in which user intervention is necessary to prevent a collision with sudden obstacles. In addition, system errors such as unexpectedly rapid increases in speed, tailgating due to sensor inaccuracies, or hacker attacks can also be simulated.</p>]]></description>
                <author><![CDATA[Fraunhofer-Gesellschaft - Institute for Open Communication Systems <info@fokus.fraunhofer.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 21:08:39 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Hintergrundgespräch: Predictive Policing in Deutschland]]></title>
                <link>https://www.beyond-eve.com/en/events/hintergrundgesprach-predictive-policing-in-deutschland</link>
                <description><![CDATA[Police authorities in six German federal states currently work with various algorithmic systems intended to predict future crime hotspots. This "predictive policing" is controversial: on the one hand, such systems prompt the authorities to analyze and improve their own police work. On the other hand, predictive policing, especially when police powers are expanded at the same time, leads to a fundamental change in police work that must be critically examined. How can this technology be applied without restricting fundamental rights or undermining the presumption of innocence?

Joachim Eschemann, head of the unit for crime affairs at the Ministry of the Interior in Düsseldorf, was responsible for developing the predictive policing system SKALA in North Rhine-Westphalia. In a background discussion on 30 August 2018 at 6:30 p.m., Dr. Tobias Knobloch will talk with him about how crime forecasts are used in everyday police work, which data are used, and about the successes and difficulties of the SKALA project in NRW.]]></description>
                <author><![CDATA[Stiftung Neue Verantwortung e. V. <info@stiftung-nv.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[“Cultural Heritage and Digitization”]]></title>
                <link>https://www.beyond-eve.com/en/events/cultural-heritage-and-digitization</link>
                <description><![CDATA[<p>The <strong>"Cultural Heritage and Digitization"</strong> Symposium is taking place on Thursday, <strong>August 30, 2018</strong> at the Fraunhofer Institute for Computer Graphics Research IGD. As part of the European Year of Cultural Heritage, Fraunhofer IGD, in cooperation with the City of Science Darmstadt, invites national and international experts to come together to discuss future-oriented solutions for preserving cultural heritage.</p><p><br></p><p>The event will be combined with the local award ceremony of the <a href="http://www.europeanheritageawards.eu/winner_year/2018/" rel="noopener noreferrer" target="_blank"><strong>EU Prize for Cultural Heritage/Europa Nostra Award 2018</strong></a>, with which Fraunhofer IGD is being recognized this year for its CultLab3D project. The CultLab3D team has already been celebrating since June 22, when their project was honored in the official ceremony during the European Cultural Heritage Summit in Berlin: the European Commission and Europa Nostra honored 29 winners from 17 countries for their outstanding achievements in preservation, research, volunteering, education, training, and raising awareness. <a href="https://www.igd.fraunhofer.de/en/press/news/cultlab3d-receives-prize" rel="noopener noreferrer" target="_blank"><strong>CultLab3D was among the winners of the prize</strong> in the <strong>Research category</strong></a>. Critical to this success is the forward-looking technology, which automatically produces 3D scans in the highest resolution – with unprecedented speed.</p>]]></description>
                <author><![CDATA[Fraunhofer-Gesellschaft - Institute Computer Graphics Research <info@igd.fraunhofer.de>]]></author>
                <pubDate>Sat, 05 Dec 2020 22:07:10 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[21. informatica feminale]]></title>
                <link>https://www.beyond-eve.com/en/events/21-informatica-feminale</link>
                <description><![CDATA[<strong>Focus 2018 “Gender politics at engineering workplaces”</strong>

Courses and talks on topics such as gender, equality, technology, and ethics are a special focus this year.
Companies with gender-sensitive organizational concepts and successful personnel management strategies for promoting women engineers into a broad spectrum of leadership positions will present their best practices to the participants of Informatica Feminale.
                <author><![CDATA[Kompetenzzentrum Frauen in Naturwissenschaft und Technik <oechtering@uni-bremen.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:40 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Ethics and algorithmic processes for decision making and decision support]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/ethics-and-algorithmic-processes-for-decision-making-and-decision-support</link>
                <description><![CDATA[<p>Far from being a thing of the future, automated decision-making informed by algorithms (ADM) is already a widespread phenomenon in our contemporary society. It is used in contexts as varied as advanced driver assistance systems, where cars are caused to brake in case of danger, and software packages that decide whether or not a person is eligible for a bank loan. Actions of government are also increasingly supported by ADM systems, whether in “predictive policing” or deciding whether a person should be released from prison. What is more, ADM is only just in its infancy: in just a few years’ time, every single person will be affected daily in one way or another by decisions reached using algorithmic processes. Automation is set to play a part in every area of politics and law.</p><p>Current ethical debates about the consequences of automation generally focus on the rights of individuals. However, algorithmic processes – the major component of automated systems – exhibit a collective dimension first and foremost. This can only be addressed partially at the level of individual rights. For this reason, existing ethical and legal criteria are not suitable (or, at least, are inadequate) when considering algorithms generally. They lead to a conceptual blurring with regard to issues such as privacy and discrimination, when information that could potentially be misused to discriminate illegitimately is declared private. Our aim in the present article is, first, to bring a measure of clarity to the debate so that such blurring can be avoided in the future. In addition to this, we discuss ethical criteria for technology which, in the form of universal abstract principles, are to be applied to all societal contexts.</p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 21:54:14 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Shaping Consumption]]></title>
                <link>https://www.beyond-eve.com/en/events/shaping-consumption</link>
                <description><![CDATA[<p><strong>How Social Network Manipulation Tactics Are Impacting Amazon and Influencing Consumers</strong> The issue of narrative manipulation online is a major concern for social media platforms like Facebook and Twitter. In her recent talk, Renee DiResta explained how such behavior has become equally alarming on Amazon. In Amazon’s increasingly crowded marketplace, reviews have become more important for sellers. As such, companies now look for ways to “game the reviews.” Narrative manipulation can take many forms, including manufactured consensus, brigading, harassment, sock puppet accounts, and news voids. In her research, DiResta has found that the creation of positive consensus about products on Amazon is highly organized. </p><p><br></p><p>For example, potential reviewers belong to Facebook groups (often with names like “deals” to obscure their true purpose) in which they can be contacted by sellers. Sellers offer them a free product, or sometimes even pay, in exchange for positive product reviews. In order not to draw attention to these incentivized reviews—which have already been banned—sellers instruct reviewers to do things like add similar products to their wish lists, and pay upfront, but accept reimbursement via PayPal at a later date. They also seek out reviewers with long-term Amazon accounts, as this adds credibility to their reviews. </p><p><br></p><p><strong>Renee DiResta </strong>is the Director of Research at New Knowledge, and Head of Policy at nonprofit Data for Democracy. Renee investigates the spread of disinformation and manipulated narratives across social networks, and assists policymakers in understanding and responding to the problem.</p>]]></description>
                <author><![CDATA[Berkman Klein Center for Internet & Society]]></author>
                <pubDate>Sat, 05 Dec 2020 19:59:58 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[BigBrotherAwards 2018]]></title>
                <link>https://www.beyond-eve.com/en/events/bigbrotherawards-2018</link>
                <description><![CDATA[Since 2000, Digitalcourage e.V. has organized the BigBrotherAwards in Germany, "the Oscars of surveillance" (Le Monde). The BigBrotherAwards have exposed, among other things, the Payback card as a data collection card, the urine tests of trainees at Bayer AG, the machinations surrounding TollCollect's toll system, and Tchibo's brisk trade in customer data. We also revealed that the Metro Group had hidden RFID chips in its customer cards, and demonstrated why Facebook is dangerous.

The BigBrotherAwards are often ahead of their time. The scandal over the surveillance of employees at Lidl only became known to the general public a year after our award. When Rena Tangens and padeluun demanded in 2013 that "Google must be broken up", it was a radical demand that politicians and journalists only took up in 2014.

Since Edward Snowden's revelations, the BigBrotherAwards have by no means become dispensable. Every year we once again put our finger on the sore spot and set standards. In doing so, we have an impact on society and politics.]]></description>
                <author><![CDATA[digitalcourage <mail@digitalcourage.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:33:33 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Künstliche Intelligenz in allen Lebensbereichen. Wie würden sie entscheiden.]]></title>
                <link>https://www.beyond-eve.com/en/events/kunstliche-intelligenz-in-allen-lebensbereichen-wie-wurden-sie-entscheiden</link>
                <description><![CDATA[The development of artificial intelligence (AI) has made rapid progress over the past two years and will, in the foreseeable future, be responsible for changes in a wide variety of areas of life. The growing fields of application for AI give rise to a multitude of new options for action that have the potential to change familiar working structures and procedures for the better, with varying intensity and speed.

With the increasing use of AI, however, questions of acceptance also become relevant. The task is to define what AI may do and where this technology needs limits in order to be accepted by those it affects. There are various stakeholder groups, for example doctors and their patients, administrative officials and citizens, or employers and employees, whose respective interests as well as norms and values must be weighed in the ethical discussion.

Using concrete food for thought on the areas of work, healthcare, and public administration, we want to discuss with experts and an interested audience which questions the rapidly growing applications of artificial intelligence will raise in our future everyday lives.]]></description>
                <author><![CDATA[Initiative D21 <roland.dathe@initiatived21.de>]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:12 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Das Internet der Dinge als Technologische Herausforderung]]></title>
                <link>https://www.beyond-eve.com/en/events/das-internet-der-dinge-als-technologische-herausforderung</link>
                <description><![CDATA[Dear IT women, and men too,

We warmly invite you to our next meeting with a talk:

Topic: The Internet of Things as a technological challenge and driver of digitalization in companies.

By 2020 there will be six connected sensors for every person on Earth. The trend towards digitalization is thus already a reality. The talk sheds light on two things: the technical aspects and challenges of the Internet of Things (IoT), as well as deployment scenarios at customers who either make their business and company more efficient and effective or invest in innovative business models and draw benefits from digitalization.

Our speaker Eva Zauke holds degrees in computer science and business administration and works as a Development Executive at SAP SE in Walldorf. She is Chief Operating Officer of the IoT & Digital Supply Chain unit and is also responsible for the development of the user interfaces for IoT, the cloud delivery processes, and the two IoT startup accelerators in Berlin and Palo Alto. In earlier roles at SAP she was, as Vice President, responsible for the product definition and product management of SAP technology products (SAP NetWeaver, Business Objects), and worked in various areas of SAP (product development, go-to-market, Rapid Deployment Solutions).

Before SAP, she held various IT roles at Deutsche Bahn and Deutsche Post, as well as at ORACLE Consulting.

After the talk there will be an opportunity for an informal get-together at the nearby Bräustübel.

I look forward to seeing you there, and especially to new faces!

Best regards,
Kerstin Lambert]]></description>
                <author><![CDATA[Fachgruppe Frauen in der Gesellschaft für Informatik (GI) - IT-Frauen im Rhein-Neckar-Dreieck]]></author>
                <pubDate>Tue, 01 Dec 2020 15:34:18 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Queen's Lecture 2017]]></title>
                <link>https://www.beyond-eve.com/en/events/queens-lecture-2017</link>
                <description><![CDATA[<strong>Prof. Zoubin Ghahramani: „Artificial intelligence and machine learning: from understanding computation in the brain to building self-driving cars“ </strong>

What is intelligence? What is learning? Can we build computers and robots that learn? How much information does the brain store? How does mathematics help us answer these questions?

In this year’s Queen’s Lecture, Professor Zoubin Ghahramani will take you on a journey through the world of machine learning - the invisible algorithms that underlie many of the tools we use every day. These learning algorithms are used to build systems that recognize human speech, translate between languages, recognize faces and detect emotions, customize advertising, recommend products, make financial trading decisions, detect credit-card fraud and email spam, and optimize logistics and transport.
Increasingly these learning algorithms will also help analyze clinical data, make personalized treatment decisions, analyze scientific data and suggest experiments, optimize food production and energy consumption, create new works of music and art, make sense of legal texts, and power self-driving cars, autonomous urban aviation and robots.

It is hard to imagine any area of human life that will not be affected by advances in machine learning.

<strong>Zoubin Ghahramani</strong>, FRS, is professor of information engineering at the University of Cambridge and chief scientist at Uber. He is also deputy director of the Leverhulme Centre for the Future of Intelligence and a fellow of St. John's College, Cambridge.

The Queen's Lectures are supported by the British Embassy and the British Council Germany. 
The lecture will be held in English.]]></description>
                <author><![CDATA[Technische Universität Berlin]]></author>
                <pubDate>Fri, 04 Dec 2020 12:49:34 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[AI NOW 2017 Symposium]]></title>
                <link>https://www.beyond-eve.com/en/events/ai-now-2017-symposium</link>
                <description><![CDATA[The AI Now 2017 Symposium was designed to address the biggest challenges we face as AI moves further into our everyday lives. This was the second annual Symposium hosted by AI Now, with generous support from the AI Ethics and Governance Fund, the John D. and Catherine T. MacArthur Foundation, and the MIT Media Lab.

The 2017 Symposium brought together over 100 leading experts from industry, academia, civil society, and government to share ideas for technical design, research, and policy directions. Discussions this year focused on the application of AI across four key themes: Rights and Liberties, Labor and Automation, Bias and Inclusion, and Ethics and Governance. These experts spent a day in closed-door talks and discussions, then joined an evening program that was free and open to the public.

You can watch videos of the talks and panel discussions from both events below.

The AI Now 2017 Report provides a summary of the Symposium’s four focus areas with close attention to developments that have occurred in the last 12 months. The 2017 Report also incorporates key insights and high-level recommendations that emerged from discussions at the Symposium.]]></description>
                <author><![CDATA[New York University - AI Now Institute]]></author>
                <pubDate>Fri, 04 Dec 2020 21:20:51 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[The Effects of Digitalization on Gender Equality in the G20 Economies]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/the-effects-of-digitalization-on-gender-equality-in-the-g20-economies</link>
                <description><![CDATA[This study investigates how the digital revolution, which is characterized by artificial intelligence, big data, cloud computing and mobile robotics, will affect gender equality in G20 countries, and how governments and non-governmental initiatives may exploit the new digital technologies to narrow these gender gaps in the future. The study focuses on four areas to derive its policy recommendations. First, it assesses if digital technologies will affect gender equality in the foreseeable future by replacing women’s jobs to a different extent than men’s jobs. Second, it determines the state of the art in gender equality and gender-oriented policies in labor markets, financial inclusion and entrepreneurship in the G20 countries. Third, it identifies deficits in women’s digital inclusion that may impair the effectiveness of digitally empowered gender policies. It also shows how digital technologies may empower women. And fourth, it provides three detailed case studies: on gender policies in two selected countries, India and South Africa, and on digitally empowered strategies for reducing the gender gap in angel investment.]]></description>
                <author><![CDATA[IFW Kiel  Institut für Weltwirtschaft <info@ifw-kiel.de>]]></author>
                <pubDate>Sat, 12 Dec 2020 19:31:03 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Linked Data Engineering]]></title>
                <link>https://www.beyond-eve.com/en/events/linked-data-engineering</link>
                <description><![CDATA[<p>We are surrounded by data everywhere. By helping us to make better decisions, data plays a central role in our daily lives. An ever increasing number of data sources, driven by individuals and organizations, contribute to this data deluge by sharing their data with others. However, much of this data is locked up behind proprietary, unreliable, and even unstable programming interfaces that prevent us from making optimal use of it.</p><p>Linked Data has the potential to revolutionize the way we discover, access, integrate, and use data, just as the World Wide Web revolutionized the way we consume and connect documents.</p><p>This course will introduce you to the basic principles and technologies of Linked Data to enable data sharing and reuse on a massive scale. Held together by ontologies, i.e. knowledge representations based on Semantic Web technologies, Linked Data serves as the central building block of the emerging Web of Data.</p>]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Fri, 06 Aug 2021 19:22:23 +0200</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Social Media - What No One has Told You about Privacy?]]></title>
                <link>https://www.beyond-eve.com/en/events/social-media-what-no-one-has-told-you-about-privacy</link>
                <description><![CDATA[In this two-week workshop we discuss the emergence of social media, how the concept gained popularity, and how it has now become the hub of collaborative communication on the Internet. We follow this with a presentation of basic approaches that you can use to protect your data and, more importantly, your privacy on these platforms. Everyone knows the odd feeling of discomfort after adding someone you actually don't know very well to your friends' list, or to the wrong category within your contacts. In this openHPI course, participants will learn that privacy remains a concern even for users who do not actively use the Internet.]]></description>
                <author><![CDATA[Universität Potsdam - Hasso-Plattner-Institut <hpi-info@hpi.de>]]></author>
                <pubDate>Sat, 05 Dec 2020 22:13:19 +0100</pubDate>
                            </item>
                    <item>
                <title><![CDATA[Shoshana Zuboff: Surveillance Capitalism and Democracy]]></title>
                <link>https://www.beyond-eve.com/en/events/shoshana-zuboff-surveillance-capitalism-and-democracy</link>
                <description><![CDATA[<p>The collection and analysis of data is changing the way economies operate. Are these changes so fundamental that they can be said to have led to the emergence of a new form of capitalism – surveillance capitalism? If people’s behaviour is made increasingly transparent, do we become a society in which trust is no longer necessary? Are individuals a mere appendage to the digital machine, objects of new mechanisms which reward and punish according to the determinations of private capital? How is social cohesion affected when people become dispensable as a labour force, while their data continues to function as a source of value in lucrative new markets that trade in predictions of human behaviour? How should we understand the new quality of power that arises from these unprecedented conditions? What kind of society does it aim to create? And what ramifications will these developments have for the principles of liberal democracy? Will privacy law and anti-trust law be enough? How can we tame what we do not yet understand?</p><p><br></p><p><strong>Shoshana Zuboff</strong> is a social scientist and author of three books, each of which has been recognised as the definitive signal of a new epoch in technological society. Her latest book, The Age of Surveillance Capitalism, reveals a world in which technology users are no longer customers but the raw material for an entirely new economic system. Zuboff is the Charles Edward Wilson Professor Emerita at Harvard Business School and was a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard Law School from 2014 until 2016.</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Fri, 18 Dec 2020 13:16:59 +0100</pubDate>
                            </item>
            </channel>
</rss>
