<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <atom:link href="https://www.beyond-eve.com/technialarticles/rss" rel="self" type="application/rss+xml" />
        <title><![CDATA[Beyond EVE: Events]]></title>
        <link><![CDATA[https://www.beyond-eve.com/technialarticles/rss]]></link>
        <description><![CDATA[]]></description>
        <language>de-DE</language>
        <pubDate>Sun, 26 Dec 2021 12:54:23 +0100</pubDate>

            <item>
                <title><![CDATA[How to identify bias in Natural Language Processing]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/how-to-identify-bias-in-natural-language-processing</link>
                <description><![CDATA[<p><strong>Why do translation programs or chatbots on our computers often contain discriminatory tendencies towards gender or race? Here is an easy guide to understanding how bias in natural language processing works. We explain why sexist technologies such as search engines are not just an unfortunate coincidence.</strong></p><h3><strong>What is bias in translation programs?</strong></h3><p>Have you ever used machine translation to translate a sentence from Estonian? In some languages, like Estonian, pronouns and nouns do not indicate gender. When translating into English, the software has to make a choice: which word becomes male and which female? Often, however, that choice is grounded in stereotypes. Is this just a coincidence?</p><p><br></p><p>Authors:</p><p><a href="https://www.hiig.de/freya-hewett/" rel="noopener noreferrer" target="_blank">Freya Hewett</a>, Researcher, AI &amp; Society Lab</p><p><a href="https://www.hiig.de/sami-nenno/" rel="noopener noreferrer" target="_blank">Sami Nenno</a>, Student Assistant, AI &amp; Society Lab</p>]]></description>
                <author><![CDATA[Alexander von Humboldt Institute for Internet and Society (HIIG)  <info@hiig.de>]]></author>
                <pubDate>Sun, 26 Dec 2021 12:54:23 +0100</pubDate>
            </item>
            <item>
                <title><![CDATA[Defective computing: How algorithms use speech analysis to profile job candidates]]></title>
                <link>https://www.beyond-eve.com/en/technicalarticles/defective-computing-how-algorithms-use-speech-analysis-to-profile-job-candidates</link>
                <description><![CDATA[<p><strong>Some companies and scientists present affective computing, the algorithmic analysis of personality traits also known as “artificial emotional intelligence”, as an important new development. But the methods used are often dubious and pose serious risks of discrimination.</strong></p><p>It was announced with some fanfare that Alexa and others would soon demonstrate breakthroughs in the field of emotion analysis. Much is written about affective computing, but products are far from market ready. For example, Amazon’s emotion assistant <a href="https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions" rel="noopener noreferrer" target="_blank">Dylan</a> is said to be able to read people’s emotions just by listening to their voices. However, Dylan currently exists only in the form of a patent.</p><p>So far, Amazon, Google et al. have not launched such products. Identifying unique signals that indicate that someone is sad seems to be more complicated than they initially thought. Maybe someone’s voice sounds depressed because they are depressed, but maybe they are just tired or exhausted.</p><p>These difficulties, however, do not prevent other companies from launching products that claim to have solved these complex problems by using voice and speech for character and personality analysis.</p><p>In Germany, two examples spring to mind. One is the company Precire, based in Aachen, a city on the border with Belgium. Their idea: you record a voice sample, and based on the person’s choice of words, sentence structure and many other indicators, the software then produces an analysis of their character traits. The software can be used in staff recruitment or to identify candidates for promotion.</p><p>The company states that its software carries out the analysis based on a 15-minute speech sample. Then-CEO Mario Reis stated in an <a href="https://blog.recrutainment.de/2016/05/11/persoenlichkeitsprofil-aus-der-analyse-von-sprache-einfach-nur-creepy-oder-die-technologie-von-morgen-interview-mit-mario-reis-von-psyware-und-britta-nollmann-von-randstad/" rel="noopener noreferrer" target="_blank">interview</a> in 2016 that the results were based on science and had been scientifically tested. This statement is repeated in <a href="https://www.springer.com/de/book/9783658187705" rel="noopener noreferrer" target="_blank">a book</a> published in 2018, which also cites additional studies and findings to further support the scientific grounding of the method.</p><p><em>By Veronika Thiel</em></p>]]></description>
                <author><![CDATA[AlgorithmWatch gGmbH <info@algorithmwatch.org>]]></author>
                <pubDate>Sat, 12 Dec 2020 17:26:38 +0100</pubDate>
            </item>
    </channel>
</rss>
