<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <atom:link href="https://feeds.simplecast.com/eL5Oo9jN" rel="self" title="MP3 Audio" type="application/rss+xml"/>
    <atom:link href="https://simplecast.superfeedr.com" rel="hub" xmlns="http://www.w3.org/2005/Atom"/>
    <generator>https://simplecast.com</generator>
    <title>Data &amp; Society</title>
    <description>Presenting timely conversations about the purpose and power of technology that bridge our interdisciplinary research with broader public conversations about the societal implications of data and automation.

For more information, visit datasociety.net.</description>
    <language>en</language>
    <pubDate>Fri, 20 Mar 2026 15:00:00 +0000</pubDate>
    <lastBuildDate>Fri, 20 Mar 2026 15:00:13 +0000</lastBuildDate>
    <image>
      <link>https://listen.datasociety.net</link>
      <title>Data &amp; Society</title>
      <url>https://image.simplecastcdn.com/images/3deecd15-b5fd-4134-9316-27de084c9d3e/faff21a4-0864-4c67-9495-9886f9aa8053/3000x3000/data-society-logo-sqaure-white.jpg?aid=rss_feed</url>
    </image>
    <link>https://listen.datasociety.net</link>
    <itunes:type>episodic</itunes:type>
    <itunes:summary>Presenting timely conversations about the purpose and power of technology that bridge our interdisciplinary research with broader public conversations about the societal implications of data and automation.

For more information, visit datasociety.net.</itunes:summary>
    <itunes:author>Data &amp; Society</itunes:author>
    <itunes:explicit>false</itunes:explicit>
    <itunes:image href="https://image.simplecastcdn.com/images/3deecd15-b5fd-4134-9316-27de084c9d3e/faff21a4-0864-4c67-9495-9886f9aa8053/3000x3000/data-society-logo-sqaure-white.jpg?aid=rss_feed"/>
    <itunes:new-feed-url>https://feeds.simplecast.com/eL5Oo9jN</itunes:new-feed-url>
    <itunes:owner>
      <itunes:name>Data &amp; Society</itunes:name>
      <itunes:email>events@datasociety.net</itunes:email>
    </itunes:owner>
    <itunes:category text="Society &amp; Culture"/>
    <itunes:category text="Technology"/>
    <itunes:category text="Education"/>
    <item>
      <guid isPermaLink="false">8c2865c2-9b35-432c-a516-7cc53226bc5c</guid>
      <title>The Craft of Science with AI: Evidence, Judgment, and Practice | Public Panel</title>
      <description><![CDATA[<p>As AI is integrated into scientific practice, the practice of science itself is changing. AI models that summarize, categorize, simulate, and predict not only stand to accelerate scientific research; they now sit inside these practices, alternately enhancing and eroding craft, shifting how questions are posed, what counts as evidence, and how tacit judgment is taught and exercised, and reshaping trust in results.</p>
<p> </p>
<p><a href="https://www.hhmi.org/scientists/kristin-m-branson" target="_blank" rel="noopener noreferrer">Dr. Kristin M. Branson</a> (<a href="https://bsky.app/profile/kristinmbranson.bsky.social" target="_blank" rel="noopener noreferrer">@kristinmbranson.bsky.social</a>) is a senior group leader at the Howard Hughes Medical Institute’s (HHMI) Janelia Research Campus in Ashburn, Virginia.</p>
<p><a href="https://anthropology.yale.edu/profile/lisa-messeri" target="_blank" rel="noopener noreferrer">Dr. Lisa Messeri</a> (<a href="https://bsky.app/profile/lmesseri.bsky.social" target="_blank" rel="noopener noreferrer">@lmesseri.bsky.social</a>) is an associate professor of sociocultural anthropology at Yale University. </p>
<p><a href="https://history.wisc.edu/people/nelson-nicole-c/" target="_blank" rel="noopener noreferrer">Dr. Nicole C. Nelson</a> (<a href="https://bsky.app/profile/nicolecnelson.bsky.social" target="_blank" rel="noopener noreferrer">@nicolecnelson.bsky.social</a>) is an associate professor in the Department of Medical History and Bioethics at the University of Wisconsin–Madison.</p>
]]></description>
      <pubDate>Fri, 20 Mar 2026 15:00:00 +0000</pubDate>
      <author>events@datasociety.net (Kristin Branson, Lisa Messeri, Nicole Nelson, Ranjit Singh)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>As AI is integrated into scientific practice, the practice of science itself is changing. AI models that summarize, categorize, simulate, and predict not only stand to accelerate scientific research; they now sit inside these practices, alternately enhancing and eroding craft, shifting how questions are posed, what counts as evidence, and how tacit judgment is taught and exercised, and reshaping trust in results.</p>
<p> </p>
<p><a href="https://www.hhmi.org/scientists/kristin-m-branson" target="_blank" rel="noopener noreferrer">Dr. Kristin M. Branson</a> (<a href="https://bsky.app/profile/kristinmbranson.bsky.social" target="_blank" rel="noopener noreferrer">@kristinmbranson.bsky.social</a>) is a senior group leader at the Howard Hughes Medical Institute’s (HHMI) Janelia Research Campus in Ashburn, Virginia.</p>
<p><a href="https://anthropology.yale.edu/profile/lisa-messeri" target="_blank" rel="noopener noreferrer">Dr. Lisa Messeri</a> (<a href="https://bsky.app/profile/lmesseri.bsky.social" target="_blank" rel="noopener noreferrer">@lmesseri.bsky.social</a>) is an associate professor of sociocultural anthropology at Yale University. </p>
<p><a href="https://history.wisc.edu/people/nelson-nicole-c/" target="_blank" rel="noopener noreferrer">Dr. Nicole C. Nelson</a> (<a href="https://bsky.app/profile/nicolecnelson.bsky.social" target="_blank" rel="noopener noreferrer">@nicolecnelson.bsky.social</a>) is an associate professor in the Department of Medical History and Bioethics at the University of Wisconsin–Madison.</p>
]]></content:encoded>
      <enclosure length="57913765" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/53d742a2-f5b0-4bdb-b8d3-d0b89a0af42f/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/audio/group/357c8ae3-04f4-4094-a9a8-50c04454c1c3/group-item/fb8b91c2-89c9-41e4-990e-222c2443be7f/128_default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Craft of Science with AI: Evidence, Judgment, and Practice | Public Panel</itunes:title>
      <itunes:author>Kristin Branson, Lisa Messeri, Nicole Nelson, Ranjit Singh</itunes:author>
      <itunes:duration>01:00:10</itunes:duration>
      <itunes:summary>On March 19, 2026, in a conversation moderated by D&amp;S AI on the Ground program director Ranjit Singh, Kristin M. Branson, Lisa Messeri, and Nicole C. Nelson discussed the impact of machine learning tools on the nature of proof, inference, uncertainty, and error in scientific workflows today.</itunes:summary>
      <itunes:subtitle>On March 19, 2026, in a conversation moderated by D&amp;S AI on the Ground program director Ranjit Singh, Kristin M. Branson, Lisa Messeri, and Nicole C. Nelson discussed the impact of machine learning tools on the nature of proof, inference, uncertainty, and error in scientific workflows today.</itunes:subtitle>
      <itunes:keywords>quantitative research, craft of science, qualitative research, artificial intelligence, ai, evidence</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>138</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b6701dec-1ffd-4893-8068-075ddae77f99</guid>
      <title>Mental Health, Chatbots, and the Future of Care | Databite No. 165</title>
      <description><![CDATA[<p>While many people have found benefit and respite in using chatbots for companionship, mental health, and emotional support, the widespread adoption of these tools has also resulted in harm and raised deep concerns about identity and safety. How are chatbots shaping people’s understanding of themselves? What concerns do therapists have about their use? How might these tools be designed and implemented to prioritize users’ wellbeing? What kinds of guardrails, regulations, and safety protocols might be effective?</p>
<p>In connection with Data & Society’s ongoing research on <a href="https://datasociety.net/points/what-happens-when-people-turn-to-chatbots-for-therapy/" target="_blank" rel="noopener noreferrer">mental health and chatbots</a>, on February 26 we explored these questions and more in a conversation moderated by researchers Livia Garofalo and Briana Vecchione. Together with Luca Belli, AI safety lead at Spring Health; Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy & Technology; and psychiatrist and psychotherapist Marlynn Wei, they discussed the profound shifts in how people seek help and support, and how mental health professionals, policymakers, and tech designers are navigating these shifts now.</p>
<p><a href="https://datasociety.net/events/mental-health-chatbots-and-the-future-of-care/" target="_blank" rel="noopener noreferrer">Learn more about the event and Data & Society’s research on mental health chatbot interventions</a>.</p>
]]></description>
      <pubDate>Tue, 3 Mar 2026 16:16:17 +0000</pubDate>
      <author>events@datasociety.net (Briana Vecchione, Livia Garofalo, Luca Belli, Marlynn Wei, Miranda Bogen)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>While many people have found benefit and respite in using chatbots for companionship, mental health, and emotional support, the widespread adoption of these tools has also resulted in harm and raised deep concerns about identity and safety. How are chatbots shaping people’s understanding of themselves? What concerns do therapists have about their use? How might these tools be designed and implemented to prioritize users’ wellbeing? What kinds of guardrails, regulations, and safety protocols might be effective?</p>
<p>In connection with Data & Society’s ongoing research on <a href="https://datasociety.net/points/what-happens-when-people-turn-to-chatbots-for-therapy/" target="_blank" rel="noopener noreferrer">mental health and chatbots</a>, on February 26 we explored these questions and more in a conversation moderated by researchers Livia Garofalo and Briana Vecchione. Together with Luca Belli, AI safety lead at Spring Health; Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy & Technology; and psychiatrist and psychotherapist Marlynn Wei, they discussed the profound shifts in how people seek help and support, and how mental health professionals, policymakers, and tech designers are navigating these shifts now.</p>
<p><a href="https://datasociety.net/events/mental-health-chatbots-and-the-future-of-care/" target="_blank" rel="noopener noreferrer">Learn more about the event and Data & Society’s research on mental health chatbot interventions</a>.</p>
]]></content:encoded>
      <enclosure length="56635875" type="audio/mpeg" url="https://cdn.simplecast.com/media/audio/transcoded/53d742a2-f5b0-4bdb-b8d3-d0b89a0af42f/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/audio/group/7d92e675-d594-4ca7-b12f-e4b88b945c40/group-item/c62328c7-c13b-4199-86c7-cd0a87f7b2f2/128_default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Mental Health, Chatbots, and the Future of Care | Databite No. 165</itunes:title>
      <itunes:author>Briana Vecchione, Livia Garofalo, Luca Belli, Marlynn Wei, Miranda Bogen</itunes:author>
      <itunes:duration>00:58:51</itunes:duration>
      <itunes:summary>In a conversation moderated by researchers Livia Garofalo and Briana Vecchione, Luca Belli, AI safety lead at Spring Health; Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy &amp; Technology; and psychiatrist and psychotherapist Marlynn Wei, discussed the profound shifts in how people seek help and support, and how mental health professionals, policymakers, and tech designers are navigating these shifts now.</itunes:summary>
      <itunes:subtitle>In a conversation moderated by researchers Livia Garofalo and Briana Vecchione, Luca Belli, AI safety lead at Spring Health; Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy &amp; Technology; and psychiatrist and psychotherapist Marlynn Wei, discussed the profound shifts in how people seek help and support, and how mental health professionals, policymakers, and tech designers are navigating these shifts now.</itunes:subtitle>
      <itunes:keywords>chatbots, mental health, mental health care, counselor, ai, chatgpt, 988, chatbots and mental health</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>137</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6ebd7f1d-1e88-443d-84dc-3e952a45fe68</guid>
      <title>(404) Job Not Found: The AI Literacy Trap at Work | Databite No. 164</title>
      <description><![CDATA[<p>In her new report <a href="https://datasociety.net/wp-content/uploads/2026/02/404-job-not-found-layout-30012026-final.pdf" target="_blank"><i>(404) Job Not Found: What Workforce Training Can’t Fix for Black Atlantans in the Age of AI</i></a>, Data & Society researcher Anuli Akanegbu provides the first ethnographic examination of how AI-related skills are defined, taught, and valued across Atlanta’s growing tech economy. Drawing on interviews, field observations, and historical analysis, she traces how AI literacy is promoted by industry, implemented by government, and interpreted by workers and community leaders navigating an increasingly AI-driven workforce infrastructure.</p><p>On February 17, Akanegbu, TechEquity Senior Vice President of Labor Programs Tim Newman, and Bard Computer Science Professor Annabel Rothschild held a critical conversation on the policy stakes of AI-focused workforce development at the state and national level. Informed by Akanegbu’s report and an accompanying policy brief co-authored by D&S Policy Manager Serena Oduro, who moderated the conversation, the panelists discussed how government and industry priorities shape workers’ access to opportunity and how policy can address the real-world impacts of automation and AI on workers.</p><p> </p><p><a href="https://datasociety.net/events/404-job-not-found/" target="_blank">Learn more about the event</a></p><p><a href="https://datasociety.net/library/404-job-not-found/" target="_blank">Read Anuli's report</a></p><p><a href="https://datasociety.net/points/building-civic-strength-for-an-ai-era/" target="_blank">Learn about Data & Society's 'AI Civics' Initiative</a></p>
]]></description>
      <pubDate>Fri, 20 Feb 2026 21:51:58 +0000</pubDate>
      <author>events@datasociety.net (Anuli Akanegbu, Tim Newman, Annabel Rothschild)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In her new report <a href="https://datasociety.net/wp-content/uploads/2026/02/404-job-not-found-layout-30012026-final.pdf" target="_blank"><i>(404) Job Not Found: What Workforce Training Can’t Fix for Black Atlantans in the Age of AI</i></a>, Data & Society researcher Anuli Akanegbu provides the first ethnographic examination of how AI-related skills are defined, taught, and valued across Atlanta’s growing tech economy. Drawing on interviews, field observations, and historical analysis, she traces how AI literacy is promoted by industry, implemented by government, and interpreted by workers and community leaders navigating an increasingly AI-driven workforce infrastructure.</p><p>On February 17, Akanegbu, TechEquity Senior Vice President of Labor Programs Tim Newman, and Bard Computer Science Professor Annabel Rothschild held a critical conversation on the policy stakes of AI-focused workforce development at the state and national level. Informed by Akanegbu’s report and an accompanying policy brief co-authored by D&S Policy Manager Serena Oduro, who moderated the conversation, the panelists discussed how government and industry priorities shape workers’ access to opportunity and how policy can address the real-world impacts of automation and AI on workers.</p><p> </p><p><a href="https://datasociety.net/events/404-job-not-found/" target="_blank">Learn more about the event</a></p><p><a href="https://datasociety.net/library/404-job-not-found/" target="_blank">Read Anuli's report</a></p><p><a href="https://datasociety.net/points/building-civic-strength-for-an-ai-era/" target="_blank">Learn about Data & Society's 'AI Civics' Initiative</a></p>
]]></content:encoded>
      <enclosure length="58223305" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/51d080b8-b767-4334-b864-da74c3fd4ef6/audio/7bcc54e0-436d-4647-80ab-367733c458a3/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>(404) Job Not Found: The AI Literacy Trap at Work | Databite No. 164</itunes:title>
      <itunes:author>Anuli Akanegbu, Tim Newman, Annabel Rothschild</itunes:author>
      <itunes:duration>01:00:30</itunes:duration>
      <itunes:summary>On February 17, Akanegbu, TechEquity Senior Vice President of Labor Programs Tim Newman, and Bard Computer Science Professor Annabel Rothschild held a critical conversation on the policy stakes of AI-focused workforce development at the state and national level. This conversation was moderated by D&amp;S Policy Manager Serena Oduro.</itunes:summary>
      <itunes:subtitle>On February 17, Akanegbu, TechEquity Senior Vice President of Labor Programs Tim Newman, and Bard Computer Science Professor Annabel Rothschild held a critical conversation on the policy stakes of AI-focused workforce development at the state and national level. This conversation was moderated by D&amp;S Policy Manager Serena Oduro.</itunes:subtitle>
      <itunes:keywords>digital skilling, reskilling, labor, ai literacy, atlanta, ai, atlanta georgia, ai &amp; labor</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>136</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">10e4bb00-710c-44d4-9e4b-9e4e15c40f25</guid>
      <title>One Year Later: What We’ve Learned About Trump’s AI Agenda | Databite No. 163</title>
      <description><![CDATA[<p>The second Trump administration has launched a full-scale effort to achieve “unchallenged global technological dominance.” It is accelerating the construction of AI infrastructure, from opening up federal lands to ramping up energy production. It has invoked AI-enabled “efficiency” in order to replace federal workers, removed agency guidance on algorithmic discrimination, and supercharged the use of AI in areas including defense and immigration enforcement. The administration has also pursued novel public ownership efforts, such as taking equity in Intel and critical minerals firms. To what end? Officials say they are now maximizing the “export of the American AI technology stack.” This is not the deregulatory tech agenda predicted by both supporters and critics of President Trump. So what is it?</p><p>How should we understand the administration’s actions when it comes to AI? What dynamics are driving these changes in AI policymaking? What might be the downstream consequences for Americans? And how should we respond?</p>
]]></description>
      <pubDate>Sun, 25 Jan 2026 16:57:11 +0000</pubDate>
      <author>events@datasociety.net (Vittoria Elliott, Edward Ongweso Jr., Brian Chen, Alondra Nelson)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>The second Trump administration has launched a full-scale effort to achieve “unchallenged global technological dominance.” It is accelerating the construction of AI infrastructure, from opening up federal lands to ramping up energy production. It has invoked AI-enabled “efficiency” in order to replace federal workers, removed agency guidance on algorithmic discrimination, and supercharged the use of AI in areas including defense and immigration enforcement. The administration has also pursued novel public ownership efforts, such as taking equity in Intel and critical minerals firms. To what end? Officials say they are now maximizing the “export of the American AI technology stack.” This is not the deregulatory tech agenda predicted by both supporters and critics of President Trump. So what is it?</p><p>How should we understand the administration’s actions when it comes to AI? What dynamics are driving these changes in AI policymaking? What might be the downstream consequences for Americans? And how should we respond?</p>
]]></content:encoded>
      <enclosure length="63188402" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/c8037ec4-66b4-495d-b6f5-1c7f652b03f0/audio/98617d86-bd57-41f2-9563-72eebb45b4dc/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>One Year Later: What We’ve Learned About Trump’s AI Agenda | Databite No. 163</itunes:title>
      <itunes:author>Vittoria Elliott, Edward Ongweso Jr., Brian Chen, Alondra Nelson</itunes:author>
      <itunes:duration>01:05:40</itunes:duration>
      <itunes:summary>After a turbulent first year, this discussion featured experts — academic and policy advisor Alondra Nelson, independent writer and editor Edward Ongweso Jr., and WIRED reporter Vittoria Elliott — who have been closely monitoring and making sense of the Trump administration’s policies on AI and digital technologies. This conversation was moderated by Data &amp; Society policy director Brian Chen.
</itunes:summary>
      <itunes:subtitle>After a turbulent first year, this discussion featured experts — academic and policy advisor Alondra Nelson, independent writer and editor Edward Ongweso Jr., and WIRED reporter Vittoria Elliott — who have been closely monitoring and making sense of the Trump administration’s policies on AI and digital technologies. This conversation was moderated by Data &amp; Society policy director Brian Chen.
</itunes:subtitle>
      <itunes:keywords>trump administration, ai policymaking, ai policy, ai and government, policy, us government</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>135</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">5a73ecf8-4207-4b9b-a4b7-25e6e9ba7006</guid>
      <title>Standing Up for Human Value in the AI Economy | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [4]</title>
      <description><![CDATA[<p>Generative AI models are marketed as the next revolution in workplace automation, but they ultimately rely on human labor — from the people labeling content and checking outputs, to the content creators and workers whose data are extracted to build the systems. As management and organizational leaders adopt AI across workplaces, the use of these systems raises questions about how companies are reshaping the quality of work, job security, and the value of human labor. How are workers’ lives impacted when AI is used to monitor performance, surveil output, or make intrusive management decisions? Will AI disrupt industries and business models? How can we make sure technology supports workers, rather than undermining them?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p>
]]></description>
      <pubDate>Fri, 16 Jan 2026 15:39:26 +0000</pubDate>
      <author>events@datasociety.net (Nic Dawes, Aiha Nguyen, Amba Kak, Michelle Miller, Sam Wheeler)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Generative AI models are marketed as the next revolution in workplace automation, but they ultimately rely on human labor — from the people labeling content and checking outputs, to the content creators and workers whose data are extracted to build the systems. As management and organizational leaders adopt AI across workplaces, the use of these systems raises questions about how companies are reshaping the quality of work, job security, and the value of human labor. How are workers’ lives impacted when AI is used to monitor performance, surveil output, or make intrusive management decisions? Will AI disrupt industries and business models? How can we make sure technology supports workers, rather than undermining them?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p>
]]></content:encoded>
      <enclosure length="66596940" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/fb5327ba-4c7f-4223-9d59-28c90afeda03/audio/1563f929-bc91-48d0-a899-f2015ceb3d5d/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Standing Up for Human Value in the AI Economy | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [4]</itunes:title>
      <itunes:author>Nic Dawes, Aiha Nguyen, Amba Kak, Michelle Miller, Sam Wheeler</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/83a36998-f4d2-4f6c-8a5a-dd9fae36d5dd/3000x3000/4-thumbnail-human-value.jpg?aid=rss_feed"/>
      <itunes:duration>01:08:59</itunes:duration>
      <itunes:summary>In this closing session of &apos;Understanding AI&apos;, The City Executive Director and journalist Nic Dawes spoke with D&amp;S Labor Futures Program Director Aiha Nguyen; AI Now Institute Co-Executive Director Amba Kak; Michelle Miller, Director of Innovation at the Center for Labor and a Just Economy; and Writers Guild of America East Executive Director Sam Wheeler about the value of human labor in increasingly automated workplaces.</itunes:summary>
      <itunes:subtitle>In this closing session of &apos;Understanding AI&apos;, The City Executive Director and journalist Nic Dawes spoke with D&amp;S Labor Futures Program Director Aiha Nguyen; AI Now Institute Co-Executive Director Amba Kak; Michelle Miller, Director of Innovation at the Center for Labor and a Just Economy; and Writers Guild of America East Executive Director Sam Wheeler about the value of human labor in increasingly automated workplaces.</itunes:subtitle>
      <itunes:keywords>labor futures, new york public library, ai, nypl</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>134</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">fd0c2eee-2407-4d47-81b5-fde87ead1477</guid>
      <title>Reorienting AI for the Public Interest | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [3]</title>
      <description><![CDATA[<p>The concentration of power and lack of regulation in the technology industry directly shapes how AI is designed and deployed, and whose interests it serves. That means decisions about these tools often reflect corporate priorities over public benefits. While AI is often held up as a tool to increase “efficiency,” it is essential to ask: efficiency for whom, and at what cost? What would it mean to create and oversee AI in the public’s best interest? How could these technologies be made more accountable to the people and communities they affect? And what is needed to create a future where AI works for everyone?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p>
]]></description>
      <pubDate>Fri, 16 Jan 2026 15:33:55 +0000</pubDate>
      <author>events@datasociety.net (Janet Haven, Catherine Bracy, Charlton McIlwain, Julia Angwin)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>The concentration of power and lack of regulation in the technology industry directly shapes how AI is designed and deployed, and whose interests it serves. That means decisions about these tools often reflect corporate priorities over public benefits. While AI is often held up as a tool to increase “efficiency,” it is essential to ask: efficiency for whom, and at what cost? What would it mean to create and oversee AI in the public’s best interest? How could these technologies be made more accountable to the people and communities they affect? And what is needed to create a future where AI works for everyone?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p>
]]></content:encoded>
      <enclosure length="66529107" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/2b2e0f0b-2696-4352-900a-49697fe2fa1e/audio/673e8a8a-bf2b-42bc-93a0-577294e233a5/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Reorienting AI for the Public Interest | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [3]</itunes:title>
      <itunes:author>Janet Haven, Catherine Bracy, Charlton McIlwain, Julia Angwin</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/801751ab-6a6f-4eee-b8cd-f38a554d39e6/3000x3000/3-thumbnail-public-interest.jpg?aid=rss_feed"/>
      <itunes:duration>01:08:55</itunes:duration>
      <itunes:summary>For the third event of the &apos;Understanding AI&apos; series, Data &amp; Society Executive Director Janet Haven hosted a critical discussion about AI accountability and the public interest, featuring professor Charlton McIlwain, journalist Julia Angwin, and civic technologist Catherine Bracy. </itunes:summary>
      <itunes:subtitle>For the third event of the &apos;Understanding AI&apos; series, Data &amp; Society Executive Director Janet Haven hosted a critical discussion about AI accountability and the public interest, featuring professor Charlton McIlwain, journalist Julia Angwin, and civic technologist Catherine Bracy. </itunes:subtitle>
      <itunes:keywords>new york public library, ai, nypl, public</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>133</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6e2e3537-5b9b-49b2-9449-cddb6fff2bd7</guid>
      <title>The Environmental Costs of AI Are Surging – What Now? | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [2]</title>
      <description><![CDATA[<p>Artificial intelligence technologies run on powerful computers that require vast amounts of energy, water, and critical minerals. As AI use grows, so does its environmental footprint. Yet there is little consensus on how to assess and address the technology’s toll on the climate before irreparable damage is done. How can we understand the impact AI data centers have on communities and the environment? How can we ensure that communities are able to use empirical data about those impacts to fight back?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p><p> </p>
]]></description>
      <pubDate>Fri, 16 Jan 2026 15:28:30 +0000</pubDate>
      <author>events@datasociety.net (Tamara Kneese, Jasmine McNealy, Sanjana Paul)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Artificial intelligence technologies run on powerful computers that require vast amounts of energy, water, and critical minerals. As AI use grows, so does its environmental footprint. Yet there is little consensus on how to assess and address the technology’s toll on the climate before irreparable damage is done. How can we understand the impact AI data centers have on communities and the environment? How can we ensure that communities are able to use empirical data about those impacts to fight back?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p><p> </p>
]]></content:encoded>
      <enclosure length="55912833" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/c11720d2-1e72-4968-9e6d-dd41e65b86a8/audio/0fccf791-8b4b-447e-9930-4c9ae8edf209/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Environmental Costs of AI Are Surging – What Now? | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [2]</itunes:title>
      <itunes:author>Tamara Kneese, Jasmine McNealy, Sanjana Paul</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/d1d80cb7-fadf-4edc-a143-9f2a76799c05/3000x3000/1-thumbnail-understanding-ai.jpg?aid=rss_feed"/>
      <itunes:duration>00:57:51</itunes:duration>
      <itunes:summary>For the second installment of the &apos;Understanding AI&apos; series, researcher Tamara Kneese spoke with environmental justice researcher Sanjana Paul and critical social scientist Jasmine McNealy about the environmental toll AI development is taking on local water supplies, energy systems, and communities around the world.</itunes:summary>
      <itunes:subtitle>For the second installment of the &apos;Understanding AI&apos; series, researcher Tamara Kneese spoke with environmental justice researcher Sanjana Paul and critical social scientist Jasmine McNealy about the environmental toll AI development is taking on local water supplies, energy systems, and communities around the world.</itunes:subtitle>
      <itunes:keywords>data centers, new york public library, ai, environment, nypl</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>132</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ecf3cb2f-c7c9-4754-bc6d-04839cb96491</guid>
      <title>Understanding AI: What the Public Needs to Know | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [1]</title>
      <description><![CDATA[<p>Artificial intelligence (AI) is reshaping many aspects of our daily lives: from the way people are hired for jobs, to how housing applications are reviewed, to how government services are delivered in healthcare, education, and beyond. But while organizations of all kinds have been introducing AI systems into their core functions, there is uncertainty about how they are working — including who is on the receiving end of their benefits and harms. What do we need to know about AI and automated decision-making tools today? How can we better understand the technology’s influence, and make informed decisions about where and how to use it?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p>
]]></description>
      <pubDate>Fri, 16 Jan 2026 15:27:06 +0000</pubDate>
      <author>events@datasociety.net (Alice Marwick, Suresh Venkatasubramanian, Meredith Broussard, Mimi Ọnụọha)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Artificial intelligence (AI) is reshaping many aspects of our daily lives: from the way people are hired for jobs, to how housing applications are reviewed, to how government services are delivered in healthcare, education, and beyond. But while organizations of all kinds have been introducing AI systems into their core functions, there is uncertainty about how they are working — including who is on the receiving end of their benefits and harms. What do we need to know about AI and automated decision-making tools today? How can we better understand the technology’s influence, and make informed decisions about where and how to use it?</p><p> </p><p><strong>About 'Understanding AI'</strong></p><p>In the fall of 2025, <a href="https://www.nypl.org/" target="_blank">The New York Public Library</a> and <a href="https://datasociety.net/" target="_blank">Data & Society</a> collaborated to present “Understanding AI,” a four-part live event series exploring the social implications of artificial intelligence and its impacts on democracy, the environment, and human labor. Featuring key figures in the AI ethics field, these events took place at the <a href="https://www.nypl.org/locations/snfl" target="_blank">Stavros Niarchos Foundation Library (SNFL)</a> in New York City as part of the library’s 7 Stories Up program, and are now available for all to watch.</p><p><a href="https://datasociety.net/events/understanding-ai/" target="_blank">Revisit the series</a></p>
]]></content:encoded>
      <enclosure length="57828526" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/846642dc-d703-4dca-8432-ce1cf56fa65b/audio/38c4ad27-0106-40f0-8da5-125ef4a46792/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Understanding AI: What the Public Needs to Know | &apos;Understanding AI&apos; — NYPL x D&amp;S Event Series [1]</itunes:title>
      <itunes:author>Alice Marwick, Suresh Venkatasubramanian, Meredith Broussard, Mimi Ọnụọha</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/3a882692-8c1d-4f44-a410-075656b7a526/3000x3000/2-thumbnail-environmental-costs.jpg?aid=rss_feed"/>
      <itunes:duration>00:59:51</itunes:duration>
      <itunes:summary>In this opening session of the &apos;Understanding AI&apos; series, social scientist Alice E. Marwick hosted a grounding conversation about how AI technologies are reshaping daily life with artist Mimi Ọnụọha, journalist Meredith Broussard, and computer science professor Suresh Venkatasubramanian.</itunes:summary>
      <itunes:subtitle>In this opening session of the &apos;Understanding AI&apos; series, social scientist Alice E. Marwick hosted a grounding conversation about how AI technologies are reshaping daily life with artist Mimi Ọnụọha, journalist Meredith Broussard, and computer science professor Suresh Venkatasubramanian.</itunes:subtitle>
      <itunes:keywords>new york public library, ai, nypl</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>131</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e3628f53-bf38-4ee0-bdc8-a700f02324e7</guid>
      <title>Climate-Conscious Tech Workers: Turning the Tide from Within | Databite 162</title>
      <description><![CDATA[<p>In this moment of AI ascendance and data center accelerationism, there are thousands of tech workers who are concerned about the realities of climate change and see the tech industry’s growing role in it — and who are actively working to create change, develop better tools, and organize for collective action. In her report "Turning the Tide: Climate Action in and Against Tech," Climate, Technology, and Justice Program Director Tamara Kneese examines the ways these workers have attempted to reform the tech industry from within while applying external forms of pressure through policymaking and activism. By engaging in workplace activism and forming broader coalitions with environmental justice organizations, climate-conscious tech workers who adhere to the organizer mindset use their insider knowledge to advocate for social change rather than technical tweaks. What does that look like in practice?</p><p>Read <a href="https://datasociety.net/library/turning-the-tide/" target="_blank"><i>Turning the Tide: Climate Action in and Against Tech</i></a></p><p><a href="https://datasociety.net/events/climate-conscious-tech-workers-turning-the-tide-from-within/" target="_blank">Learn more about the event and its speakers</a>.</p>
]]></description>
      <pubDate>Fri, 12 Dec 2025 15:49:06 +0000</pubDate>
      <author>events@datasociety.net (Eliza Pan, Khari Johnson, Tamara Kneese)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In this moment of AI ascendance and data center accelerationism, there are thousands of tech workers who are concerned about the realities of climate change and see the tech industry’s growing role in it — and who are actively working to create change, develop better tools, and organize for collective action. In her report "Turning the Tide: Climate Action in and Against Tech," Climate, Technology, and Justice Program Director Tamara Kneese examines the ways these workers have attempted to reform the tech industry from within while applying external forms of pressure through policymaking and activism. By engaging in workplace activism and forming broader coalitions with environmental justice organizations, climate-conscious tech workers who adhere to the organizer mindset use their insider knowledge to advocate for social change rather than technical tweaks. What does that look like in practice?</p><p>Read <a href="https://datasociety.net/library/turning-the-tide/" target="_blank"><i>Turning the Tide: Climate Action in and Against Tech</i></a></p><p><a href="https://datasociety.net/events/climate-conscious-tech-workers-turning-the-tide-from-within/" target="_blank">Learn more about the event and its speakers</a>.</p>
]]></content:encoded>
      <enclosure length="61603428" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/84467fe2-2ad3-40b6-a70d-a210321050b9/audio/46dca555-3420-48b5-bd6b-c9098b4b3390/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Climate-Conscious Tech Workers: Turning the Tide from Within | Databite 162</itunes:title>
      <itunes:author>Eliza Pan, Khari Johnson, Tamara Kneese</itunes:author>
      <itunes:duration>01:04:00</itunes:duration>
      <itunes:summary>On December 11, Climate, Technology, and Justice Program Director Tamara Kneese discussed the findings of her report with Eliza Pan, co-director of Amazon Employees for Climate Justice, in a conversation moderated by Khari Johnson, technology reporter at CalMatters. Together, they explored the stakes of tech-focused climate work — and how it gets done.</itunes:summary>
      <itunes:subtitle>On December 11, Climate, Technology, and Justice Program Director Tamara Kneese discussed the findings of her report with Eliza Pan, co-director of Amazon Employees for Climate Justice, in a conversation moderated by Khari Johnson, technology reporter at CalMatters. Together, they explored the stakes of tech-focused climate work — and how it gets done.</itunes:subtitle>
      <itunes:keywords>tech workers, amazon, climate, tech, climate-conscious</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>130</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c6423528-e0b4-441a-9615-fa219c3e4b88</guid>
      <title>A Roadmap for Rewiring Democracy in the Age of AI | Book Talk</title>
      <description><![CDATA[<p>Democracy faces challenges worldwide, and artificial intelligence has become an increasing part of that. In their book <i>Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship</i>, cybersecurity technologist Bruce Schneier and data scientist Nathan E. Sanders methodically unpack the ways AI is changing every aspect of democracy, while making the case that we can harness the technology to support and strengthen these systems. Neither fear-mongering nor utopian, <i>Rewiring Democracy</i> aims to present a clear-eyed and optimistic path for putting democratic principles at the heart of AI development — highlighting how citizens, public servants, and elected officials can use AI to expand access to justice and inform, empower, and engage the public.</p><p>On October 23, the authors discussed their book with Data & Society’s Director of Research Alice Marwick, walking us through their roadmap for understanding how AI is changing power and participation and what we can do to shape that change for the better.</p>
]]></description>
      <pubDate>Tue, 25 Nov 2025 21:37:46 +0000</pubDate>
      <author>events@datasociety.net (Bruce Schneier, Nathan E. Sanders, Alice Marwick)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Democracy faces challenges worldwide, and artificial intelligence has become an increasing part of that. In their book <i>Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship</i>, cybersecurity technologist Bruce Schneier and data scientist Nathan E. Sanders methodically unpack the ways AI is changing every aspect of democracy, while making the case that we can harness the technology to support and strengthen these systems. Neither fear-mongering nor utopian, <i>Rewiring Democracy</i> aims to present a clear-eyed and optimistic path for putting democratic principles at the heart of AI development — highlighting how citizens, public servants, and elected officials can use AI to expand access to justice and inform, empower, and engage the public.</p><p>On October 23, the authors discussed their book with Data & Society’s Director of Research Alice Marwick, walking us through their roadmap for understanding how AI is changing power and participation and what we can do to shape that change for the better.</p>
]]></content:encoded>
      <enclosure length="57055685" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/118aa784-19cc-4144-bdd0-d7a8f2059078/audio/0da4d507-531c-4552-8f7e-04de226d9cfa/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>A Roadmap for Rewiring Democracy in the Age of AI | Book Talk</itunes:title>
      <itunes:author>Bruce Schneier, Nathan E. Sanders, Alice Marwick</itunes:author>
      <itunes:duration>00:59:18</itunes:duration>
      <itunes:summary>On October 23, in a book talk moderated by Data &amp; Society Director of Research Alice Marwick, cybersecurity technologist Bruce Schneier and data scientist Nathan E. Sanders unpacked their book &apos;Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.&apos; Together, they discussed the ways AI is changing every aspect of democracy while making the case that we can harness the technology to support and strengthen these systems.</itunes:summary>
      <itunes:subtitle>On October 23, in a book talk moderated by Data &amp; Society Director of Research Alice Marwick, cybersecurity technologist Bruce Schneier and data scientist Nathan E. Sanders unpacked their book &apos;Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.&apos; Together, they discussed the ways AI is changing every aspect of democracy while making the case that we can harness the technology to support and strengthen these systems.</itunes:subtitle>
      <itunes:keywords>artificial intelligence, democracy, ai, participatory data</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>129</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">17420c66-8ebd-4b51-83ba-6b174546a15a</guid>
      <title>Challenging AI Hype and Tech Industry Power | Book Talk</title>
      <description><![CDATA[<p>Visit <a href="https://datasociety.net/" target="_blank">datasociety.net</a> to learn more about <a href="https://datasociety.net/events/challenging-ai-hype-and-tech-industry-power/" target="_blank">this Book Talk</a>'s speakers, access resources and referenced materials, and purchase copies of <i>The AI Con </i>and <i>Empire of AI</i>.</p><p>Purchase copies of these books from <a href="https://bookshop.org/shop/datasociety" target="_blank">our Bookshop</a>:</p><ul><li><a href="https://bookshop.org/a/14284/9780063418561" target="_blank"><i>The AI Con: </i></a><a href="https://bookshop.org/a/14284/9780063418561"><i>How to Fight Big Tech’s Hype and Create the Future We Want</i></a><i> </i>by Emily M. Bender and Alex Hanna</li><li><a href="https://bookshop.org/a/14284/9780593657508" target="_blank"><i>Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI</i></a> by Karen Hao</li></ul>
]]></description>
      <pubDate>Wed, 11 Jun 2025 19:50:44 +0000</pubDate>
      <author>events@datasociety.net (Emily M. Bender, Karen Hao, Alex Hanna, Tamara Kneese)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Visit <a href="https://datasociety.net/" target="_blank">datasociety.net</a> to learn more about <a href="https://datasociety.net/events/challenging-ai-hype-and-tech-industry-power/" target="_blank">this Book Talk</a>'s speakers, access resources and referenced materials, and purchase copies of <i>The AI Con </i>and <i>Empire of AI</i>.</p><p>Purchase copies of these books from <a href="https://bookshop.org/shop/datasociety" target="_blank">our Bookshop</a>:</p><ul><li><a href="https://bookshop.org/a/14284/9780063418561" target="_blank"><i>The AI Con: </i></a><a href="https://bookshop.org/a/14284/9780063418561"><i>How to Fight Big Tech’s Hype and Create the Future We Want</i></a><i> </i>by Emily M. Bender and Alex Hanna</li><li><a href="https://bookshop.org/a/14284/9780593657508" target="_blank"><i>Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI</i></a> by Karen Hao</li></ul>
]]></content:encoded>
      <enclosure length="57894678" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/fd03589f-bb40-4881-8f16-ed2d500e1262/audio/8f5f3706-ed72-4724-87ce-182bda0a8845/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Challenging AI Hype and Tech Industry Power | Book Talk</itunes:title>
      <itunes:author>Emily M. Bender, Karen Hao, Alex Hanna, Tamara Kneese</itunes:author>
      <itunes:duration>01:00:10</itunes:duration>
      <itunes:summary>Artificial Intelligence (AI) is not magic or sentient; it does not even describe one coherent set of technologies. In two new books, Emily M. Bender, Alex Hanna, and Karen Hao explore how AI instead serves as a powerful marketing tool for tech giants who have a product to sell and profits to rake in. On June 6 — in a conversation moderated by Tamara Kneese, director of Data &amp; Society’s Climate, Technology, and Justice program — these authors explored AI’s impact on our environment and society, and the motivations of the tech elite that build and shape it.</itunes:summary>
      <itunes:subtitle>Artificial Intelligence (AI) is not magic or sentient; it does not even describe one coherent set of technologies. In two new books, Emily M. Bender, Alex Hanna, and Karen Hao explore how AI instead serves as a powerful marketing tool for tech giants who have a product to sell and profits to rake in. On June 6 — in a conversation moderated by Tamara Kneese, director of Data &amp; Society’s Climate, Technology, and Justice program — these authors explored AI’s impact on our environment and society, and the motivations of the tech elite that build and shape it.</itunes:subtitle>
      <itunes:keywords>the ai con, book talk, empire of ai, ai, imperial ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>128</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dc7c8c71-68b8-42e0-939d-1f1b89cefca1</guid>
      <title>What is Work Worth? Exploring What Generative AI Means for Workers’ Lives and Labor | Keynote Event</title>
      <description><![CDATA[<p>Recorded on May 6, 2025 at The Greene Space in NYC </p><p>Featuring Dr. Julián Posada and Aiha Nguyen </p><p>Resources and recordings are available here: <a href="https://datasociety.net/events/what-is-work-worth/" target="_blank">https://datasociety.net/events/what-is-work-worth/</a></p>
]]></description>
      <pubDate>Tue, 20 May 2025 19:51:43 +0000</pubDate>
      <author>events@datasociety.net (Dr. Julián Posada, Aiha Nguyen)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Recorded on May 6, 2025 at The Greene Space in NYC </p><p>Featuring Dr. Julián Posada and Aiha Nguyen </p><p>Resources and recordings are available here: <a href="https://datasociety.net/events/what-is-work-worth/" target="_blank">https://datasociety.net/events/what-is-work-worth/</a></p>
]]></content:encoded>
      <enclosure length="57906757" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/2ea7780d-290c-46f1-92ec-f80879fa7e46/audio/da930b43-a536-4b06-a984-96f82ec4e106/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>What is Work Worth? Exploring What Generative AI Means for Workers’ Lives and Labor | Keynote Event</itunes:title>
      <itunes:author>Dr. Julián Posada, Aiha Nguyen</itunes:author>
      <itunes:duration>01:00:18</itunes:duration>
      <itunes:summary>On May 6, 2025 at The Greene Space in New York City, Dr. Julián Posada and Aiha Nguyen set the stage for Data &amp; Society’s online workshop “What is Work Worth? Exploring What Generative AI Means for Workers’ Lives and Labor.” Drawing on interviews with Venezuelan data workers as well as peer research and accounts from the field, Dr. Posada’s keynote highlighted three stories that illustrate the socioeconomic conditions enabling generative AI’s development and deployment.</itunes:summary>
      <itunes:subtitle>On May 6, 2025 at The Greene Space in New York City, Dr. Julián Posada and Aiha Nguyen set the stage for Data &amp; Society’s online workshop “What is Work Worth? Exploring What Generative AI Means for Workers’ Lives and Labor.” Drawing on interviews with Venezuelan data workers as well as peer research and accounts from the field, Dr. Posada’s keynote highlighted three stories that illustrate the socioeconomic conditions enabling generative AI’s development and deployment.</itunes:subtitle>
      <itunes:keywords>data workers, generative ai, labor, work, workers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>127</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">98d29c57-5a64-412c-aae5-7a068dbe246a</guid>
      <title>[Live] The Cloud is Dead: Living with Legacies of Resource Extraction</title>
      <description><![CDATA[<p><strong>Books</strong></p><p><a href="https://bookshop.org/a/14284/9780300248272" target="_blank">Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond</a> (Tamara Kneese)</p><p><a href="https://bookshop.org/a/14284/9781250419897" target="_blank">The Pacific Circuit: A Globalized Account of the Battle for the Soul of an American City</a> (Alexis Madrigal)</p><p><a href="https://bookshop.org/a/14284/9780374538668" target="_blank">Blockchain Chicken Farm: And Other Stories of Tech in China's Countryside</a> (Xiaowei Wang)</p>
]]></description>
      <pubDate>Mon, 5 May 2025 19:48:02 +0000</pubDate>
      <author>events@datasociety.net (Alexis Madrigal, Xiaowei Wang, Tamara Kneese)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><strong>Books</strong></p><p><a href="https://bookshop.org/a/14284/9780300248272" target="_blank">Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond</a> (Tamara Kneese)</p><p><a href="https://bookshop.org/a/14284/9781250419897" target="_blank">The Pacific Circuit: A Globalized Account of the Battle for the Soul of an American City</a> (Alexis Madrigal)</p><p><a href="https://bookshop.org/a/14284/9780374538668" target="_blank">Blockchain Chicken Farm: And Other Stories of Tech in China's Countryside</a> (Xiaowei Wang)</p>
]]></content:encoded>
      <enclosure length="57264406" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/bfcbec04-6b1e-43b6-9422-4c4d7f614732/audio/b2e56e68-0921-4e22-9e61-462a0dda3bb0/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Live] The Cloud is Dead: Living with Legacies of Resource Extraction</itunes:title>
      <itunes:author>Alexis Madrigal, Xiaowei Wang, Tamara Kneese</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/2200d616-760f-4fd1-9ac5-de6922a2bf3b/3000x3000/dands-logo-simplecast.jpg?aid=rss_feed"/>
      <itunes:duration>00:59:36</itunes:duration>
      <itunes:summary>This live fireside chat was co-organized by Data &amp; Society and Tech Workers Coalition during SF Climate Week 2025 at Tamarack, a community space in Oakland, California. The discussion features Tamara Kneese in dialogue with tech journalist Alexis Madrigal and internet researcher Xiaowei R. Wang. The speakers reflect on the relationship between current discourses around environmental impacts and the histories of resource extraction across the AI supply chain. How are communities grappling with the material environmental impacts of digital technologies from an international, historical perspective? How do we build solidarities of understanding between knowledge workers and directly impacted communities?</itunes:summary>
      <itunes:subtitle>This live fireside chat was co-organized by Data &amp; Society and Tech Workers Coalition during SF Climate Week 2025 at Tamarack, a community space in Oakland, California. The discussion features Tamara Kneese in dialogue with tech journalist Alexis Madrigal and internet researcher Xiaowei R. Wang. The speakers reflect on the relationship between current discourses around environmental impacts and the histories of resource extraction across the AI supply chain. How are communities grappling with the material environmental impacts of digital technologies from an international, historical perspective? How do we build solidarities of understanding between knowledge workers and directly impacted communities?</itunes:subtitle>
      <itunes:keywords>data, extraction, climate, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>126</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1972d09b-e524-4466-9509-5c412c2f62dd</guid>
      <title>Resisting Predatory Data | Book Talk</title>
      <description><![CDATA[<p>At the turn of the 20th century, the anti-immigration and eugenics movements used data about marginalized people to fuel racial divisions and political violence under the guise of streamlining society toward the future. Today, as the tech industry champions itself as a global leader of progress and innovation, we are falling into the same trap.</p><p>On April 10th, Anita Say Chan, author of <a href="https://www.ucpress.edu/books/predatory-data/paper" target="_blank"><i>Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future</i></a><i> </i>(UCP 2025 and open access), joined Émile P. Torres and Timnit Gebru for a discussion of the 21st century eugenics revival in big tech and how to resist it in a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem. <i>Predatory Data</i> is the first book to draw this direct line between the datafication and prediction techniques of past eugenicists and today’s often violent and extractive “big data” regimes. Torres and Gebru have also extensively studied the second wave of eugenics, identifying a suite of tech-utopian ideologies they call the <a href="https://www.dair-institute.org/tescreal/" target="_blank">TESCREAL bundle</a>.</p><p>Purchase your own copy of Anita Say Chan’s book <i>Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future</i>: <a href="https://bookshop.org/a/14284/9780520402843" target="_blank">https://bookshop.org/a/14284/9780520402843</a>.</p><p>Learn more about the event at <a href="https://datasociety.net/" target="_blank">datasociety.net</a> (<a href="https://datasociety.net/events/resisting-predatory-data/" target="_blank">https://datasociety.net/events/resisting-predatory-data/</a>).</p>
]]></description>
      <pubDate>Fri, 18 Apr 2025 17:38:28 +0000</pubDate>
      <author>events@datasociety.net (Anita Say Chan, Émile P. Torres, Timnit Gebru, Maia Woluchem)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>At the turn of the 20th century, the anti-immigration and eugenics movements used data about marginalized people to fuel racial divisions and political violence under the guise of streamlining society toward the future. Today, as the tech industry champions itself as a global leader of progress and innovation, we are falling into the same trap.</p><p>On April 10th, Anita Say Chan, author of <a href="https://www.ucpress.edu/books/predatory-data/paper" target="_blank"><i>Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future</i></a><i> </i>(UCP 2025 and open access), joined Émile P. Torres and Timnit Gebru for a discussion, moderated by Trustworthy Infrastructures Program Director Maia Woluchem, of the 21st-century eugenics revival in big tech and how to resist it. <i>Predatory Data</i> is the first book to draw a direct line between the datafication and prediction techniques of past eugenicists and today’s often violent and extractive “big data” regimes. Torres and Gebru have also extensively studied this second wave of eugenics, identifying a suite of tech-utopian ideologies they call the <a href="https://www.dair-institute.org/tescreal/" target="_blank">TESCREAL bundle</a>.</p><p>Purchase your own copy of Anita Say Chan’s book <i>Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future</i>: <a href="https://bookshop.org/a/14284/9780520402843" target="_blank">https://bookshop.org/a/14284/9780520402843</a>.</p><p>Learn more about the event at <a href="https://datasociety.net/" target="_blank">datasociety.net</a> (<a href="https://datasociety.net/events/resisting-predatory-data/" target="_blank">https://datasociety.net/events/resisting-predatory-data/</a>).</p>
]]></content:encoded>
      <enclosure length="60331264" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/0c03dc5d-f3f6-4890-bd5a-2bdaa237f040/audio/4f701248-764a-44bd-889e-e1bb84a948fc/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Resisting Predatory Data | Book Talk</itunes:title>
      <itunes:author>Anita Say Chan, Émile P. Torres, Timnit Gebru, Maia Woluchem</itunes:author>
      <itunes:duration>01:02:41</itunes:duration>
      <itunes:summary>On April 10th, Anita Say Chan, author of Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future (UCP 2025 and open access), joined Émile P. Torres and Timnit Gebru for a discussion, moderated by Trustworthy Infrastructures Program Director Maia Woluchem, of the 21st-century eugenics revival in big tech and how to resist it.</itunes:summary>
      <itunes:subtitle>On April 10th, Anita Say Chan, author of Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future (UCP 2025 and open access), joined Émile P. Torres and Timnit Gebru for a discussion, moderated by Trustworthy Infrastructures Program Director Maia Woluchem, of the 21st-century eugenics revival in big tech and how to resist it.</itunes:subtitle>
      <itunes:keywords>data, eugenics, predatory data, techno-eugenics, techno-eugenicists, datafication</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>125</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">a7c0278d-ec6f-45c6-a56b-5b4052723c77</guid>
      <title>AI Assistant or AI Boss? w/ Data &amp; Society</title>
      <description><![CDATA[<p>Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?</p><p>In this episode of <i>Computer Says Maybe</i>, host Alix Dunn speaks with Data & Society researchers Aiha Nguyen and Alexandra Mateescu, authors of the primer <a href="https://datasociety.net/library/generative-ai-and-labor/"><i>Generative AI and Labor: Power, Hype, and Value at Work</i></a>. They discuss how automation is now being used as a threat against workers, and how certain types of labor are being devalued by AI — especially traditionally feminized work like caregiving.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://datasociety.net/library/generative-ai-and-labor/" target="_blank">Generative AI and Labor: Power, Hype, and Value at Work</a> by Aiha Nguyen and Alexandra Mateescu</li><li><a href="https://www.hachette.co.uk/titles/brian-merchant/blood-in-the-machine/9780316487740/" target="_blank">Blood in the Machine</a> by Brian Merchant</li></ul><p><a href="https://datasociety.net/people/nguyen-aiha/" target="_blank"><i>Aiha Nguyen</i></a><i> is the Program Director for the Labor Futures Initiative at Data & Society where she guides research and engagement. She brings a practitioner's perspective to this role having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. 
She is author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’, and ‘Generative AI and Labor: Power, Hype and Value at Work’.</i></p><p><a href="https://datasociety.net/people/mateescu-alexandra/" target="_blank"><i>Alexandra Mateescu</i></a><i> is a researcher on the Labor Futures team at the Data & Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.</i></p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>
]]></description>
      <pubDate>Mon, 31 Mar 2025 14:00:00 +0000</pubDate>
      <author>events@datasociety.net (Alexandra Mateescu, Alix Dunn, Aiha Nguyen)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?</p><p>In this episode of <i>Computer Says Maybe</i>, host Alix Dunn speaks with Data & Society researchers Aiha Nguyen and Alexandra Mateescu, authors of the primer <a href="https://datasociety.net/library/generative-ai-and-labor/"><i>Generative AI and Labor: Power, Hype, and Value at Work</i></a>. They discuss how automation is now being used as a threat against workers, and how certain types of labor are being devalued by AI — especially traditionally feminized work like caregiving.</p><p><strong>Further reading:</strong></p><ul><li><a href="https://datasociety.net/library/generative-ai-and-labor/" target="_blank">Generative AI and Labor: Power, Hype, and Value at Work</a> by Aiha Nguyen and Alexandra Mateescu</li><li><a href="https://www.hachette.co.uk/titles/brian-merchant/blood-in-the-machine/9780316487740/" target="_blank">Blood in the Machine</a> by Brian Merchant</li></ul><p><a href="https://datasociety.net/people/nguyen-aiha/" target="_blank"><i>Aiha Nguyen</i></a><i> is the Program Director for the Labor Futures Initiative at Data & Society where she guides research and engagement. She brings a practitioner's perspective to this role having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. 
She is author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’, and ‘Generative AI and Labor: Power, Hype and Value at Work’.</i></p><p><a href="https://datasociety.net/people/mateescu-alexandra/" target="_blank"><i>Alexandra Mateescu</i></a><i> is a researcher on the Labor Futures team at the Data & Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024-2025 Fellow at the Siegel Family Endowment.</i></p><p><a href="https://www.saysmaybe.com/newsletter"><strong>Subscribe to our newsletter</strong></a><strong> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>
]]></content:encoded>
      <enclosure length="41310305" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/ebe90b5f-4aec-4730-9235-1ae9e9056e19/audio/4a133e96-9988-4ddf-adeb-651fb602a1ea/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>AI Assistant or AI Boss? w/ Data &amp; Society</itunes:title>
      <itunes:author>Alexandra Mateescu, Alix Dunn, Aiha Nguyen</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/8c6c831d-96be-47ca-b2e2-37e570771e90/3000x3000/podcast-20thumbnail-20-3000x3000-1.jpg?aid=rss_feed"/>
      <itunes:duration>00:43:00</itunes:duration>
      <itunes:summary>In this episode of Computer Says Maybe, host Alix Dunn speaks with Data &amp; Society Labor Futures researchers Aiha Nguyen and Alexandra Mateescu, who recently authored Generative AI and Labor: Power, Hype, and Value at Work. They discuss how automation is now being used as a threat against workers, and how certain types of labor are being devalued by AI — especially (shocking) traditionally feminised work, such as caregiving.</itunes:summary>
      <itunes:subtitle>In this episode of Computer Says Maybe, host Alix Dunn speaks with Data &amp; Society Labor Futures researchers Aiha Nguyen and Alexandra Mateescu, who recently authored Generative AI and Labor: Power, Hype, and Value at Work. They discuss how automation is now being used as a threat against workers, and how certain types of labor are being devalued by AI — especially (shocking) traditionally feminised work, such as caregiving.</itunes:subtitle>
      <itunes:keywords>generative ai, labor</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>124</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0cdfc01f-86d6-4a22-b9d1-664de0bc16c4</guid>
      <title>Connective (t)Issues: Stories of Digitality, Infrastructures, and Resistance | Public Panel</title>
      <description><![CDATA[<p>Physical and digital infrastructures have raised tensions around the world, seeding land disputes, exacerbating climate effects, and disrupting social fabrics. Yet they are also intertwined with myths of progress, transformation, and speculation. To explore these themes, we were joined by Nia Johnson, Ekene Ijeoma, and Lori Regattieri — academics, practitioners, and artists who are each, in their own way, responding to the ways digital infrastructures are transforming the built, natural, and social environments. In a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem, we broke down confrontations between technological infrastructures and local communities and discussed how to reshape narratives of process, power, change, and futurity.</p><p>This public panel is part of Connective (t)Issues, a Data & Society workshop organized by the Trustworthy Infrastructures program in partnership with Duke Science & Society. Learn more about the workshop at <a href="https://datasociety.net/" target="_blank">datasociety.net</a>.</p><p><a href="https://datasociety.net/announcements/2024/11/20/connective-tissues/" target="_blank">https://datasociety.net/announcements/2024/11/20/connective-tissues/</a></p>
]]></description>
      <pubDate>Thu, 27 Mar 2025 21:00:00 +0000</pubDate>
      <author>events@datasociety.net (Nia Johnson, Ekene Ijeoma, Lori Regattieri, Maia Woluchem)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Physical and digital infrastructures have raised tensions around the world, seeding land disputes, exacerbating climate effects, and disrupting social fabrics. Yet they are also intertwined with myths of progress, transformation, and speculation. To explore these themes, we were joined by Nia Johnson, Ekene Ijeoma, and Lori Regattieri — academics, practitioners, and artists who are each, in their own way, responding to the ways digital infrastructures are transforming the built, natural, and social environments. In a conversation moderated by Trustworthy Infrastructures Program Director Maia Woluchem, we broke down confrontations between technological infrastructures and local communities and discussed how to reshape narratives of process, power, change, and futurity.</p><p>This public panel is part of Connective (t)Issues, a Data & Society workshop organized by the Trustworthy Infrastructures program in partnership with Duke Science & Society. Learn more about the workshop at <a href="https://datasociety.net/" target="_blank">datasociety.net</a>.</p><p><a href="https://datasociety.net/announcements/2024/11/20/connective-tissues/" target="_blank">https://datasociety.net/announcements/2024/11/20/connective-tissues/</a></p>
]]></content:encoded>
      <enclosure length="59773010" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/6ae30955-fcc6-45db-a96f-dcc6c7bd34b2/audio/ebd79e41-c513-479a-b62a-ad2464f666c3/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Connective (t)Issues: Stories of Digitality, Infrastructures, and Resistance | Public Panel</itunes:title>
      <itunes:author>Nia Johnson, Ekene Ijeoma, Lori Regattieri, Maia Woluchem</itunes:author>
      <itunes:duration>01:02:06</itunes:duration>
      <itunes:summary>Physical and digital infrastructures have raised tensions around the world, seeding land disputes, climate effects, and disrupting social fabrics. Yet they are also intertwined with myths of progress, transformation, and speculation. What does this friction reveal? How can the stories we tell about infrastructures illuminate problems and lead us toward solutions? What do these stories say about where power lies and how it shifts? How might they help surface connections across nations, communities, and cultures?</itunes:summary>
      <itunes:subtitle>Physical and digital infrastructures have raised tensions around the world, seeding land disputes, climate effects, and disrupting social fabrics. Yet they are also intertwined with myths of progress, transformation, and speculation. What does this friction reveal? How can the stories we tell about infrastructures illuminate problems and lead us toward solutions? What do these stories say about where power lies and how it shifts? How might they help surface connections across nations, communities, and cultures?</itunes:subtitle>
      <itunes:keywords>trustworthy infrastructures, metaphor, storytelling, infrastructure, research, mythmaking</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>123</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3fc59034-ca3d-41ca-b24c-7243e1a7df6a</guid>
      <title>[Databite No. 161] Red Teaming Generative AI Harm</title>
      <description><![CDATA[<p>What exactly is generative AI (genAI) red-teaming? What strategies and standards should guide its implementation? And how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examined red-teaming’s place in the evolving landscape of genAI evaluation and governance.</p><p>Our discussion drew on a new report by Data & Society (D&S) and AI Risk and Vulnerability Alliance (ARVA), a nonprofit that aims to empower communities to recognize, diagnose, and manage harmful flaws in AI. The report, Red-Teaming in the Public Interest, investigates how red-teaming methods are being adapted to confront uncertainty about flaws in systems and to encourage public engagement with the evaluation and oversight of genAI systems. Red-teaming offers a flexible approach to uncovering a wide range of problems with genAI models. It also offers new opportunities for incorporating diverse communities into AI governance practices.</p><p>Ultimately, we hope this report and discussion present a vision of red-teaming as an area of public interest sociotechnical experimentation.</p><p><a target="_blank">Download the report and learn more about the speakers and references at datasociety.net.</a></p><p>--</p><p>00:00 Opening</p><p>00:12 Welcome and Framing</p><p>04:48 Panel Introductions</p><p>09:34 Discussion Overview</p><p>10:23 Lama Ahmad on The Value of Human Red-Teaming</p><p>17:37 Tarleton Gillespie on Labor and Content Moderation Antecedents</p><p>25:03 Briana Vecchione on Participation & Accountability</p><p>28:25 Camille François on Global Policy and Open-source Infrastructure</p><p>35:09 Questions and Answers</p><p>56:39 Final Takeaways</p>
]]></description>
      <pubDate>Mon, 3 Mar 2025 17:00:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>What exactly is generative AI (genAI) red-teaming? What strategies and standards should guide its implementation? And how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examined red-teaming’s place in the evolving landscape of genAI evaluation and governance.</p><p>Our discussion drew on a new report by Data & Society (D&S) and AI Risk and Vulnerability Alliance (ARVA), a nonprofit that aims to empower communities to recognize, diagnose, and manage harmful flaws in AI. The report, Red-Teaming in the Public Interest, investigates how red-teaming methods are being adapted to confront uncertainty about flaws in systems and to encourage public engagement with the evaluation and oversight of genAI systems. Red-teaming offers a flexible approach to uncovering a wide range of problems with genAI models. It also offers new opportunities for incorporating diverse communities into AI governance practices.</p><p>Ultimately, we hope this report and discussion present a vision of red-teaming as an area of public interest sociotechnical experimentation.</p><p><a target="_blank">Download the report and learn more about the speakers and references at datasociety.net.</a></p><p>--</p><p>00:00 Opening</p><p>00:12 Welcome and Framing</p><p>04:48 Panel Introductions</p><p>09:34 Discussion Overview</p><p>10:23 Lama Ahmad on The Value of Human Red-Teaming</p><p>17:37 Tarleton Gillespie on Labor and Content Moderation Antecedents</p><p>25:03 Briana Vecchione on Participation & Accountability</p><p>28:25 Camille François on Global Policy and Open-source Infrastructure</p><p>35:09 Questions and Answers</p><p>56:39 Final Takeaways</p>
]]></content:encoded>
      <enclosure length="57896017" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/03b3526f-ef00-4b44-a59c-46bd616e05ee/audio/d12e0818-0183-4c33-a08b-e59040ab7bb6/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite No. 161] Red Teaming Generative AI Harm</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:00:09</itunes:duration>
      <itunes:summary>What exactly is generative AI (genAI) red-teaming, what strategies and standards should guide its implementation, and how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examine red-teaming’s place in the evolving landscape of genAI evaluation and governance.</itunes:summary>
      <itunes:subtitle>What exactly is generative AI (genAI) red-teaming, what strategies and standards should guide its implementation, and how can it protect the public interest? In this conversation, Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, and Borhane Blili-Hamelin examine red-teaming’s place in the evolving landscape of genAI evaluation and governance.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>122</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0773d8f0-ce80-430c-ad0d-2c9934693c1a</guid>
      <title>The Taiwan Bottleneck w/ Brian Chen</title>
      <description><![CDATA[<p>Do you ever wonder how semiconductors (AKA chips) — the things that make up the fine tapestry of modern life — get made? And why does so much chip production bottleneck in Taiwan?</p><p>Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.</p><p><i>Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance.</i></p><p><i>Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.</i></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>
]]></description>
      <pubDate>Mon, 24 Feb 2025 12:00:00 +0000</pubDate>
      <author>events@datasociety.net (Alix Dunn, Brian Chen)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Do you ever wonder how semiconductors (AKA chips) — the things that make up the fine tapestry of modern life — get made? And why does so much chip production bottleneck in Taiwan?</p><p>Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.</p><p><i>Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance.</i></p><p><i>Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.</i></p><p><strong><a href="https://www.saysmaybe.com/newsletter">Subscribe to our newsletter</a> to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!</strong></p>
]]></content:encoded>
      <enclosure length="35675070" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/8b752dc5-991e-42d9-8488-c300d664e0b7/audio/94077ccb-cc02-4dd4-9513-d6d12771920d/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Taiwan Bottleneck w/ Brian Chen</itunes:title>
      <itunes:author>Alix Dunn, Brian Chen</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/ae025e0c-d519-4115-90a9-b394db43f786/3000x3000/podcast-20thumbnail-20-3000x3000-1.jpg?aid=rss_feed"/>
      <itunes:duration>00:37:09</itunes:duration>
      <itunes:summary>In time for this year’s RightsCon, Data &amp; Society Policy Director Brian J. Chen joins Alix Dunn of Computer Says Maybe for a conversation about the process of advanced chip manufacture, its entanglement with US economic policy, and the colonial making of Taiwan’s chip supremacy.</itunes:summary>
      <itunes:subtitle>In time for this year’s RightsCon, Data &amp; Society Policy Director Brian J. Chen joins Alix Dunn of Computer Says Maybe for a conversation about the process of advanced chip manufacture, its entanglement with US economic policy, and the colonial making of Taiwan’s chip supremacy.</itunes:subtitle>
      <itunes:keywords>semiconductors, chips, chip manufacture infrastructure, taiwan, chip manufacture</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>121</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">26686969-39fa-48e4-96a6-8b8bf224bdf3</guid>
      <title>Living in the Shadow of AI and Data (Code Dependent by Madhumita Murgia) | Network Book Forum</title>
      <description><![CDATA[<p>On November 14, in a conversation moderated by Data & Society Senior Researcher Ranjit Singh, Madhumita Murgia and Armin Samii discussed Murgia’s new book, <i>Code Dependent: Living in the Shadow of AI</i>. Together, they explored living with data by describing their journeys into understanding it, reporting on it, and resisting it. While Murgia’s journalistic journey began with tracing the flow of her personal data sold by data brokers, Samii used his expertise as a computer scientist to build UberCheats, an algorithm auditing tool that extracts GPS coordinates from UberEats receipts to calculate the difference between the actual miles a courier traveled and those Uber claimed they did. In <i>Code Dependent</i>, Samii’s story is the focus of a chapter on how data-driven systems come to play the role of the boss.</p><p>Purchase a copy of Code Dependent: <a href="https://bookshop.org/a/14284/9781250867391" target="_blank">https://bookshop.org/a/14284/9781250867391</a></p><p>Learn more at datasociety.net (<a href="https://datasociety.net" target="_blank">https://datasociety.net</a>)</p>
]]></description>
      <pubDate>Tue, 19 Nov 2024 15:00:00 +0000</pubDate>
      <author>events@datasociety.net (Madhumita Murgia, Armin Samii, Ranjit Singh)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>On November 14, in a conversation moderated by Data & Society Senior Researcher Ranjit Singh, Madhumita Murgia and Armin Samii discussed Murgia’s new book, <i>Code Dependent: Living in the Shadow of AI</i>. Together, they explored living with data by describing their journeys into understanding it, reporting on it, and resisting it. While Murgia’s journalistic journey began with tracing the flow of her personal data sold by data brokers, Samii used his expertise as a computer scientist to build UberCheats, an algorithm auditing tool that extracts GPS coordinates from UberEats receipts to calculate the difference between the actual miles a courier traveled and those Uber claimed they did. In <i>Code Dependent</i>, Samii’s story is the focus of a chapter on how data-driven systems come to play the role of the boss.</p><p>Purchase a copy of Code Dependent: <a href="https://bookshop.org/a/14284/9781250867391" target="_blank">https://bookshop.org/a/14284/9781250867391</a></p><p>Learn more at datasociety.net (<a href="https://datasociety.net" target="_blank">https://datasociety.net</a>)</p>
]]></content:encoded>
      <enclosure length="59807800" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/6fcf2f3f-0069-4e85-8674-c432facd5161/audio/94d03de6-62cf-4bf6-806f-412bbe7f0547/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Living in the Shadow of AI and Data (Code Dependent by Madhumita Murgia) | Network Book Forum</itunes:title>
      <itunes:author>Madhumita Murgia, Armin Samii, Ranjit Singh</itunes:author>
      <itunes:duration>01:02:08</itunes:duration>
      <itunes:summary>Code Dependent spans stories from across the globe and calls attention to the voices of ordinary people as they navigate the everyday challenges of living with data-driven systems and work to reclaim their agency. In the process, author Madhumita Murgia invites a deeper reflection on these systems and how they interact with human ethics and judgment. Learn more in this Data &amp; Society Network Book Forum.</itunes:summary>
      <itunes:subtitle>Code Dependent spans stories from across the globe and calls attention to the voices of ordinary people as they navigate the everyday challenges of living with data-driven systems and work to reclaim their agency. In the process, author Madhumita Murgia invites a deeper reflection on these systems and how they interact with human ethics and judgment. Learn more in this Data &amp; Society Network Book Forum.</itunes:subtitle>
      <itunes:keywords>data, book talk, global south, data colonialism, ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>120</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c3f730a4-27fb-4cd3-a51e-252f70f97d3a</guid>
      <title>Data &amp; Society at 10: Foreseeable Futures</title>
      <description><![CDATA[<p>When <a href="https://datasociety.net/" target="_blank">Data & Society</a> was founded ten years ago, it was rooted in the insight that data-centric technologies have broad and often unseen impacts on society — and that to better understand those impacts and realize technologies that reflect our highest values, we need interdisciplinary, empirical research.</p><p>Today, the urgency of that vision is palpable: How societies choose to design and govern technology will determine our collective future. On September 26, we celebrated our first decade with our incredible network of alumni, friends, and supporters. Along with reflections from Data & Society Executive Director Janet Haven, Board President Charlton McIlwain, and Founder danah boyd, the program included a panel discussion and lightning talks.</p><p>00:00 Opening<br />00:10 Welcome | Charlton McIlwain, Board President<br />08:23 Creating a Field | danah boyd, Founder<br />19:37 Lightning Talk: Xiaowei R. Wang<br />27:02 Lightning Talk: Ranjit Singh<br />33:09 Lightning Talk: Zara Rahman<br />38:42 Lightning Talk: Michelle Miller<br />46:00 Acting on What We Know | Alondra Nelson, John Palfrey, Felicia Wong (moderator: Suresh Venkatasubramanian)<br />1:13:47 Creating Our Future | Janet Haven, Executive Director<br />1:25:42 Closing | Charlton McIlwain, Board President</p>
]]></description>
      <pubDate>Wed, 16 Oct 2024 16:17:57 +0000</pubDate>
      <author>events@datasociety.net (Charlton McIlwain, Michelle Miller, Alondra Nelson, John Palfrey, Zara Rahman, Xiaowei Wang, Felicia Wong, danah boyd, Ranjit Singh, Suresh Venkatasubramanian, Janet Haven)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>When <a href="https://datasociety.net/" target="_blank">Data & Society</a> was founded ten years ago, it was rooted in the insight that data-centric technologies have broad and often unseen impacts on society — and that to better understand those impacts and realize technologies that reflect our highest values, we need interdisciplinary, empirical research.</p><p>Today, the urgency of that vision is palpable: How societies choose to design and govern technology will determine our collective future. On September 26, we celebrated our first decade with our incredible network of alumni, friends, and supporters. Along with reflections from Data & Society Executive Director Janet Haven, Board President Charlton McIlwain, and Founder danah boyd, the program included a panel discussion and lightning talks.</p><p>00:00 Opening<br />00:10 Welcome | Charlton McIlwain, Board President<br />08:23 Creating a Field | danah boyd, Founder<br />19:37 Lightning Talk: Xiaowei R. Wang<br />27:02 Lightning Talk: Ranjit Singh<br />33:09 Lightning Talk: Zara Rahman<br />38:42 Lightning Talk: Michelle Miller<br />46:00 Acting on What We Know | Alondra Nelson, John Palfrey, Felicia Wong (moderator: Suresh Venkatasubramanian)<br />1:13:47 Creating Our Future | Janet Haven, Executive Director<br />1:25:42 Closing | Charlton McIlwain, Board President</p>
]]></content:encoded>
      <enclosure length="85304352" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/b478c62f-6eda-446d-a3ad-317bfe7425aa/audio/1cbcf6b6-6bcb-4889-a892-cf759e98b837/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Data &amp; Society at 10: Foreseeable Futures</itunes:title>
      <itunes:author>Charlton McIlwain, Michelle Miller, Alondra Nelson, John Palfrey, Zara Rahman, Xiaowei Wang, Felicia Wong, danah boyd, Ranjit Singh, Suresh Venkatasubramanian, Janet Haven</itunes:author>
      <itunes:duration>01:28:51</itunes:duration>
      <itunes:summary>Foreseeable Futures, held on September 26, 2024, marked Data &amp; Society at 10: a celebration of our first decade with our incredible network of alumni, friends, and supporters.</itunes:summary>
      <itunes:subtitle>Foreseeable Futures, held on September 26, 2024, marked Data &amp; Society at 10: a celebration of our first decade with our incredible network of alumni, friends, and supporters.</itunes:subtitle>
      <itunes:keywords>data, foreseeable futures, futures</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>119</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4a78ec77-0f3e-417c-b9a4-a840215c5a04</guid>
      <title>[Databite 160] Black Maternal Health is in Crisis. Can Technology Help?</title>
      <description><![CDATA[<p>In the United States, Black maternal health is in steep decline. Despite increased awareness and better data about the depths of racial health disparities, outcomes for Black birthing people remain poor. At the same time, a revolution in healthcare technologies is underway, and as they provide care at the frontlines of a crisis, birth workers are figuring out how to make digital health technologies work for them and their patients.</p><p>In "<a href="https://datasociety.net/library/establishing-vigilant-care/">Establishing Vigilant Care: Data Infrastructures and the Black Birthing Experience</a>," Joan Mukogosi explores how digital health technologies can produce new forms of harm for Black birthing people — by exposing Black patients to carceral systems, creating information silos that impede interoperability, and failing to meet privacy standards. By paying close attention to how clinical contexts and their associated digital technologies impact how care is delivered, this research offers a glimpse into possibilities for improved cohesion between digital health technologies and birth work.</p><p>Learn more about Data & Society at <a href="https://datasociety.net/">datasociety.net</a>.</p>
]]></description>
      <pubDate>Mon, 22 Jul 2024 18:29:45 +0000</pubDate>
      <author>events@datasociety.net (Dr. Mary Fleming, Ijeoma Uche, Joan Mukogosi)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In the United States, Black maternal health is in steep decline. Despite increased awareness and better data about the depths of racial health disparities, outcomes for Black birthing people remain poor. At the same time, a revolution in healthcare technologies is underway, and as they provide care at the frontlines of a crisis, birth workers are figuring out how to make digital health technologies work for them and their patients.</p><p>In "<a href="https://datasociety.net/library/establishing-vigilant-care/">Establishing Vigilant Care: Data Infrastructures and the Black Birthing Experience</a>," Joan Mukogosi explores how digital health technologies can produce new forms of harm for Black birthing people — by exposing Black patients to carceral systems, creating information silos that impede interoperability, and failing to meet privacy standards. By paying close attention to how clinical contexts and their associated digital technologies impact how care is delivered, this research offers a glimpse into possibilities for improved cohesion between digital health technologies and birth work.</p><p>Learn more about Data & Society at <a href="https://datasociety.net/">datasociety.net</a>.</p>
]]></content:encoded>
      <enclosure length="56387171" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/c6fd15fd-fc0d-4d94-9a7c-a1ad2ed1a292/audio/b56bdc64-61e7-4e34-8f56-a78e2e6c7569/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite 160] Black Maternal Health is in Crisis. Can Technology Help?</itunes:title>
      <itunes:author>Dr. Mary Fleming, Ijeoma Uche, Joan Mukogosi</itunes:author>
      <itunes:duration>00:58:43</itunes:duration>
      <itunes:summary>In this Databite discussion, Mukogosi spoke with Dr. Mary Fleming and Ijeoma Uche about the facts and future of data-driven maternal care for Black patients. Reckoning with the decline of Black maternal health amid advancements in clinical technologies, they discussed the implications of an increasingly data-driven response to the Black maternal health crisis.</itunes:summary>
      <itunes:subtitle>In this Databite discussion, Mukogosi spoke with Dr. Mary Fleming and Ijeoma Uche about the facts and future of data-driven maternal care for Black patients. Reckoning with the decline of Black maternal health amid advancements in clinical technologies, they discussed the implications of an increasingly data-driven response to the Black maternal health crisis.</itunes:subtitle>
      <itunes:keywords>data, fem tech, maternal health, doulas, femtech, black maternal health, maternal health care providers, datafication, birth workers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>118</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c88dae9f-663d-486c-b4df-41befe3433f3</guid>
      <title>[Podcast] The Formalization of Social Precarities</title>
      <description><![CDATA[The Formalization of Social Precarities podcast explores platformization from the point of view of precarious gig workers in the Majority World. This conversation was moderated by Aiha Nguyen and Murali Shanmugavelan and features the voices of Ambika Tandon, Ludmilla Costhek Abílio, and Ananya Raihan. You will also hear the experiences of two platform workers interviewed for this project: Fatema Begum from Bangladesh and Nicolas Sauza from Brazil. Their voices are narrated in English by Data & Society staff members Iretiolu Akinrinade and Rigoberto Lara Guzmán, respectively.

This podcast was edited by Sam Grant.
]]></description>
      <pubDate>Thu, 16 May 2024 17:20:49 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="77980828" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/c86c65e7-ab8b-4895-9c0f-be6767b3df44/audio/eaa67888-56ea-4687-8106-02901d1665d2/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Podcast] The Formalization of Social Precarities</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/8db7d6f2-b5b3-49cd-a3f8-47c7926f3625/3000x3000/dands.jpg?aid=rss_feed"/>
      <itunes:duration>01:21:13</itunes:duration>
      <itunes:summary>The Formalization of Social Precarities podcast explores platformization from the point of view of precarious gig workers in the Majority World. This conversation was moderated by Aiha Nguyen and Murali Shanmugavelan and features the voices of Ambika Tandon, Ludmilla Costhek Abílio, and Ananya Raihan. You will also hear the experiences of two platform workers interviewed for this project: Fatema Begum from Bangladesh and Nicolas Sauza from Brazil. Their voices are narrated in English by Data &amp; Society staff members Iretiolu Akinrinade and Rigoberto Lara Guzmán, respectively.

This podcast was edited by Sam Grant.</itunes:summary>
      <itunes:subtitle>The Formalization of Social Precarities podcast explores platformization from the point of view of precarious gig workers in the Majority World. This conversation was moderated by Aiha Nguyen and Murali Shanmugavelan and features the voices of Ambika Tandon, Ludmilla Costhek Abílio, and Ananya Raihan. You will also hear the experiences of two platform workers interviewed for this project: Fatema Begum from Bangladesh and Nicolas Sauza from Brazil. Their voices are narrated in English by Data &amp; Society staff members Iretiolu Akinrinade and Rigoberto Lara Guzmán, respectively.

This podcast was edited by Sam Grant.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>117</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">73f6d7a2-1e3c-4d5e-a8de-7af7c4b9ba2a</guid>
      <title>[Databite 159] Doing the Work: Therapeutic Labor, Teletherapy, and the Platformization of Mental Health Care</title>
      <description><![CDATA[Data & Society’s report, Doing the Work: Therapeutic Labor, Teletherapy, and the Platformization of Mental Health Care, written by Livia Garofalo, explores how new arrangements of therapeutic labor are affecting how therapists provide care and make a living in the US. By focusing on the experiences of providers who practice teletherapy and work for digital platforms, our research examines the fundamental tensions that emerge when a profession rooted in clinical expertise, licensing, and training standards meets the dynamics of platformization, productivity incentives, and algorithmic management.

In this conversation, we reflected on how technology is changing the conditions of how therapists do their work, on the consequences for the present and future of therapeutic labor, and on how this might be changing our understanding of therapy itself.
]]></description>
      <pubDate>Fri, 10 May 2024 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Linda Michaels, Melody Li, Mei Wa Kwong, Livia Garofalo)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="58454972" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/011c7e75-ec11-4a14-a818-1fa7d3074c06/audio/fee5267f-c991-4617-aec5-dcf4ebeb5a2b/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite 159] Doing the Work: Therapeutic Labor, Teletherapy, and the Platformization of Mental Health Care</itunes:title>
      <itunes:author>Linda Michaels, Melody Li, Mei Wa Kwong, Livia Garofalo</itunes:author>
      <itunes:duration>01:00:52</itunes:duration>
      <itunes:summary>Data &amp; Society’s report, Doing the Work: Therapeutic Labor, Teletherapy, and the Platformization of Mental Health Care, written by Livia Garofalo, explores how new arrangements of therapeutic labor are affecting how therapists provide care and make a living in the US. By focusing on the experiences of providers who practice teletherapy and work for digital platforms, our research examines the fundamental tensions that emerge when a profession rooted in clinical expertise, licensing, and training standards meets the dynamics of platformization, productivity incentives, and algorithmic management.

In this conversation, we reflected on how technology is changing the conditions of how therapists do their work, on the consequences for the present and future of therapeutic labor, and on how this might be changing our understanding of therapy itself.</itunes:summary>
      <itunes:subtitle>Data &amp; Society’s report, Doing the Work: Therapeutic Labor, Teletherapy, and the Platformization of Mental Health Care, written by Livia Garofalo, explores how new arrangements of therapeutic labor are affecting how therapists provide care and make a living in the US. By focusing on the experiences of providers who practice teletherapy and work for digital platforms, our research examines the fundamental tensions that emerge when a profession rooted in clinical expertise, licensing, and training standards meets the dynamics of platformization, productivity incentives, and algorithmic management.

In this conversation, we reflected on how technology is changing the conditions of how therapists do their work, on the consequences for the present and future of therapeutic labor, and on how this might be changing our understanding of therapy itself.</itunes:subtitle>
      <itunes:keywords>platformization, teletherapy platforms, mental health, mental health care, teletherapy, clinicians, clients, providers, patients</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>116</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">55202620-5aeb-47e7-8371-5e2cdadcdb7d</guid>
      <title>[Databite 158] Adaptation | Generative AI&apos;s Labor Impacts</title>
      <description><![CDATA[<p>Generative AI has seeped into many corners of our lives, and threatens to upend the economy as we know it, from education to the film industry. How do workers’ encounters with it differ from their experiences with other systems of automation? How are they similar, and how might this help us understand the shape and stakes of this latest technology?</p><p>In this three-part Databite series, Data & Society’s Labor Futures program brings together creators, platform workers, call center workers, coders, therapists, and performers for conversations with technologists, researchers, journalists, and economists to complicate the story of generative AI. By centering workers’ experiences and interrogating the relationship between generative AI and underexplored issues of hierarchy, recognition, and adaptation in labor, these interdisciplinary conversations will uncover how new technological systems are impacting worker agency and power.</p><p><a href="https://datasociety.net/events/generative-ais-labor-impacts/">Learn more about the speakers, series, and references at datasociety.net.</a></p>
]]></description>
      <pubDate>Wed, 24 Apr 2024 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Quinten Steenhuis, Jeff Freitas, Aiha Nguyen, Livia Garofalo)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Generative AI has seeped into many corners of our lives, and threatens to upend the economy as we know it, from education to the film industry. How do workers’ encounters with it differ from their experiences with other systems of automation? How are they similar, and how might this help us understand the shape and stakes of this latest technology?</p><p>In this three-part Databite series, Data & Society’s Labor Futures program brings together creators, platform workers, call center workers, coders, therapists, and performers for conversations with technologists, researchers, journalists, and economists to complicate the story of generative AI. By centering workers’ experiences and interrogating the relationship between generative AI and underexplored issues of hierarchy, recognition, and adaptation in labor, these interdisciplinary conversations will uncover how new technological systems are impacting worker agency and power.</p><p><a href="https://datasociety.net/events/generative-ais-labor-impacts/">Learn more about the speakers, series, and references at datasociety.net.</a></p>
]]></content:encoded>
      <enclosure length="56971743" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/cbcdc908-6422-41c0-b4a8-a0bace6bd09d/audio/7a8437e3-8985-4f92-b262-82e7433b463d/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite 158] Adaptation | Generative AI&apos;s Labor Impacts</itunes:title>
      <itunes:author>Quinten Steenhuis, Jeff Freitas, Aiha Nguyen, Livia Garofalo</itunes:author>
      <itunes:duration>00:59:19</itunes:duration>
      <itunes:summary>Narratives that cast workers as replaceable obscure the active and complex ways that workers are responding to generative AI. While many build new skills and use these tools and systems to their advantage, others sabotage, counteract, and otherwise circumvent them. The relationship workers have with technology is much more dynamic, contested, and layered. In this conversation, Livia Garofalo, Jeff Freitas, Quinten Steenhuis, and Data &amp; Society host Aiha Nguyen explored the ways that workers reshape their relationship with generative AI tools – and as a result, with work itself.</itunes:summary>
      <itunes:subtitle>Narratives that cast workers as replaceable obscure the active and complex ways that workers are responding to generative AI. While many build new skills and use these tools and systems to their advantage, others sabotage, counteract, and otherwise circumvent them. The relationship workers have with technology is much more dynamic, contested, and layered. In this conversation, Livia Garofalo, Jeff Freitas, Quinten Steenhuis, and Data &amp; Society host Aiha Nguyen explored the ways that workers reshape their relationship with generative AI tools – and as a result, with work itself.</itunes:subtitle>
      <itunes:keywords>generative ai</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>115</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f8a6030e-cffd-4829-b728-56ffb3dde0f3</guid>
      <title>What&apos;s Trust Got To Do With It? | &apos;Trust Issues&apos; Workshop Public Panel</title>
      <description><![CDATA[<p>This public keynote was part of <a href="https://datasociety.net/announcements/2023/11/29/trust-issues/" target="_blank">Trust Issues</a>, a Data & Society workshop organized by the <a href="https://datasociety.net/research/trustworthy-infrastructures/" target="_blank">Trustworthy Infrastructures</a> program. That team includes Sareeta Amrute, Livia Garofalo, Robyn Caplan, Joan Mukogosi, Tiara Roxanne, and Kadija Ferryman.</p>
]]></description>
      <pubDate>Thu, 28 Mar 2024 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Chelsea Peterson-Salahuddin, Irene Solaiman, Jason D&apos;Cruz, Sareeta Amrute)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>This public keynote was part of <a href="https://datasociety.net/announcements/2023/11/29/trust-issues/" target="_blank">Trust Issues</a>, a Data & Society workshop organized by the <a href="https://datasociety.net/research/trustworthy-infrastructures/" target="_blank">Trustworthy Infrastructures</a> program. That team includes Sareeta Amrute, Livia Garofalo, Robyn Caplan, Joan Mukogosi, Tiara Roxanne, and Kadija Ferryman.</p>
]]></content:encoded>
      <enclosure length="60880137" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/7584d1c6-7f64-4384-adad-ecd84ea4bf7e/audio/bc93d0a4-c856-4ed8-9a60-d00130d8c70e/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>What&apos;s Trust Got To Do With It? | &apos;Trust Issues&apos; Workshop Public Panel</itunes:title>
      <itunes:author>Chelsea Peterson-Salahuddin, Irene Solaiman, Jason D&apos;Cruz, Sareeta Amrute</itunes:author>
      <itunes:duration>01:03:05</itunes:duration>
      <itunes:summary>In a conversation moderated by D&amp;S Principal Researcher Sareeta Amrute, panelists Chelsea Peterson-Salahuddin, Irene Solaiman, and Jason D’Cruz discussed how practitioners, theorists, and community members approach the fraught issue of trust inside and outside institutions. Together, they considered how legacies of racism, dehumanization, refusal, and opacity inform trust — how it operates, and fails to operate, in data-centric spaces. They also discussed trust’s typical framing as a normative construct, as well as the meaning of mistrust and its consequences for vulnerable communities.</itunes:summary>
      <itunes:subtitle>In a conversation moderated by D&amp;S Principal Researcher Sareeta Amrute, panelists Chelsea Peterson-Salahuddin, Irene Solaiman, and Jason D’Cruz discussed how practitioners, theorists, and community members approach the fraught issue of trust inside and outside institutions. Together, they considered how legacies of racism, dehumanization, refusal, and opacity inform trust — how it operates, and fails to operate, in data-centric spaces. They also discussed trust’s typical framing as a normative construct, as well as the meaning of mistrust and its consequences for vulnerable communities.</itunes:subtitle>
      <itunes:keywords>digital media, trustworthy infrastructures, elections, marginalized communities, trust, community, representation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>114</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1a2061e6-be15-42e6-a1a4-088bea4af048</guid>
      <title>Data In/Visibility (Queer Data Studies) | Network Book Forum</title>
      <description><![CDATA[<p>Purchase your own copy of Queer Data Studies here: <a href="https://bookshop.org/a/14284/9780295751979">https://bookshop.org/a/14284/9780295751979</a>.</p>
]]></description>
      <pubDate>Fri, 23 Feb 2024 15:04:29 +0000</pubDate>
      <author>events@datasociety.net (Joan Mukogosi, Nikita Shepard, Harris Kornstein, Patrick Keilty)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Purchase your own copy of Queer Data Studies here: <a href="https://bookshop.org/a/14284/9780295751979">https://bookshop.org/a/14284/9780295751979</a>.</p>
]]></content:encoded>
      <enclosure length="58549299" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/c216d7b3-d763-4f87-a93b-ddda08b986cc/audio/21b84d67-8121-436f-b4c8-51f7650c5680/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Data In/Visibility (Queer Data Studies) | Network Book Forum</itunes:title>
      <itunes:author>Joan Mukogosi, Nikita Shepard, Harris Kornstein, Patrick Keilty</itunes:author>
      <itunes:duration>01:00:58</itunes:duration>
      <itunes:summary>Queer people have long been rendered invisible by data systems: survey questions that impose gendered binaries, inquiries that dismiss queer subjects as unimportant or insignificant, and ahistorical erasures of queer life that push queer experiences and knowledge further into the margins.

Yet visibility also comes with risk. Digital and biomedical surveillance, personal data breaches, and privacy concerns arise when indications of queerness, real or otherwise, are present and unprotected in datasets. The threat imposed by interlocking systems of anti-queer violence and oppression seeds movements away from visibility and towards fugitive tactics of refusal — a kind of strategic invisibility. 

On February 15, in a conversation moderated by Data &amp; Society Research Analyst Joan Mukogosi, Nikita Shepard and Harris Kornstein discussed this problem of data in/visibility as they explored it in their contributions to Queer Data Studies, an anthology edited by Patrick Keilty featuring essays that examine, from a range of disciplinary approaches, how data impacts queer subjects. Together, Shepard, Kornstein, and Keilty broke down the dichotomy between visibility and opacity of queer subjects in data, and engaged in the generative practice of thinking about data from and through queerness.</itunes:summary>
      <itunes:subtitle>Queer people have long been rendered invisible by data systems: survey questions that impose gendered binaries, inquiries that dismiss queer subjects as unimportant or insignificant, and ahistorical erasures of queer life that push queer experiences and knowledge further into the margins.

Yet visibility also comes with risk. Digital and biomedical surveillance, personal data breaches, and privacy concerns arise when indications of queerness, real or otherwise, are present and unprotected in datasets. The threat imposed by interlocking systems of anti-queer violence and oppression seeds movements away from visibility and towards fugitive tactics of refusal — a kind of strategic invisibility. 

On February 15, in a conversation moderated by Data &amp; Society Research Analyst Joan Mukogosi, Nikita Shepard and Harris Kornstein discussed this problem of data in/visibility as they explored it in their contributions to Queer Data Studies, an anthology edited by Patrick Keilty featuring essays that examine, from a range of disciplinary approaches, how data impacts queer subjects. Together, Shepard, Kornstein, and Keilty broke down the dichotomy between visibility and opacity of queer subjects in data, and engaged in the generative practice of thinking about data from and through queerness.</itunes:subtitle>
      <itunes:keywords>invisibility, book talk, queer data, privacy, visibility, queer data studies</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>113</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7e00af36-19be-4e89-b8a7-5179ecba8109</guid>
      <title>[Databite No. 157] Recognition | Generative AI&apos;s Labor Impacts</title>
      <description><![CDATA[<p>Generative AI has seeped into many corners of our lives, and threatens to upend the economy as we know it, from education to the film industry. How do workers’ encounters with it differ from their experiences with other systems of automation? How are they similar, and how might this help us understand the shape and stakes of this latest technology?</p><p>In this three-part Databite series, Data & Society’s Labor Futures program brings together creators, platform workers, call center workers, coders, therapists, and performers for conversations with technologists, researchers, journalists, and economists to complicate the story of generative AI. By centering workers’ experiences and interrogating the relationship between generative AI and underexplored issues of hierarchy, recognition, and adaptation in labor, these interdisciplinary conversations will uncover how new technological systems are impacting worker agency and power.</p><p><a href="https://datasociety.net/events/generative-ais-labor-impacts/">Learn more about the speakers, series, and references at datasociety.net.</a></p>
]]></description>
      <pubDate>Wed, 14 Feb 2024 04:00:00 +0000</pubDate>
      <author>events@datasociety.net (Enongo Lumumba-Kasongo, Şerife (Sherry) Wong, Sara Ziff, Aiha Nguyen)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Generative AI has seeped into many corners of our lives, and threatens to upend the economy as we know it, from education to the film industry. How do workers’ encounters with it differ from their experiences with other systems of automation? How are they similar, and how might this help us understand the shape and stakes of this latest technology?</p><p>In this three-part Databite series, Data & Society’s Labor Futures program brings together creators, platform workers, call center workers, coders, therapists, and performers for conversations with technologists, researchers, journalists, and economists to complicate the story of generative AI. By centering workers’ experiences and interrogating the relationship between generative AI and underexplored issues of hierarchy, recognition, and adaptation in labor, these interdisciplinary conversations will uncover how new technological systems are impacting worker agency and power.</p><p><a href="https://datasociety.net/events/generative-ais-labor-impacts/">Learn more about the speakers, series, and references at datasociety.net.</a></p>
]]></content:encoded>
      <enclosure length="63583154" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/b4047c0f-eb57-4852-b83b-2dffdc20623f/audio/60345f82-27f3-494e-be84-b7814d6663ae/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite No. 157] Recognition | Generative AI&apos;s Labor Impacts</itunes:title>
      <itunes:author>Enongo Lumumba-Kasongo, Şerife (Sherry) Wong, Sara Ziff, Aiha Nguyen</itunes:author>
      <itunes:duration>01:06:13</itunes:duration>
      <itunes:summary>In labor parlance, “recognition” is the pathway by which workers become a union. In what other ways can we recognize the value of work — beyond the form it takes? With artists and models finding that generative AI reduces them to their image, their words on a page, notes in a song, and even their measurements, how does this emerging technology diminish the value of workers and their contributions, and how might we recognize it? In this discussion, Enongo Lumumba-Kasongo, Şerife (Sherry) Wong, Sara Ziff, and Aiha Nguyen pry open the black box of generative AI and consider what is lost or appropriated in the process of extraction. </itunes:summary>
      <itunes:subtitle>In labor parlance, “recognition” is the pathway by which workers become a union. In what other ways can we recognize the value of work — beyond the form it takes? With artists and models finding that generative AI reduces them to their image, their words on a page, notes in a song, and even their measurements, how does this emerging technology diminish the value of workers and their contributions, and how might we recognize it? In this discussion, Enongo Lumumba-Kasongo, Şerife (Sherry) Wong, Sara Ziff, and Aiha Nguyen pry open the black box of generative AI and consider what is lost or appropriated in the process of extraction. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>112</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">86e72b9a-4b2b-4d6e-a89d-93de8c0612bc</guid>
      <title>[Databite No. 156] Hierarchy | Generative AI&apos;s Labor Impacts</title>
      <description><![CDATA[<p><strong>About the Series</strong></p><p>Generative AI has seeped into many corners of our lives, and threatens to upend the economy as we know it, from education to the film industry. How do workers’ encounters with it differ from their experiences with other systems of automation? How are they similar, and how might this help us understand the shape and stakes of this latest technology?</p><p>In this three-part Databite series, Data & Society’s Labor Futures program brings together creators, platform workers, call center workers, coders, therapists, and performers for conversations with technologists, researchers, journalists, and economists to complicate the story of generative AI. By centering workers’ experiences and interrogating the relationship between generative AI and underexplored issues of hierarchy, recognition, and adaptation in labor, these interdisciplinary conversations will uncover how new technological systems are impacting worker agency and power.</p><p><a href="https://datasociety.net/events/generative-ais-labor-impacts/">Learn more about the speakers, series, and references at datasociety.net.</a></p>
]]></description>
      <pubDate>Mon, 22 Jan 2024 09:00:00 +0000</pubDate>
      <author>events@datasociety.net (John Lopez, Milagros Miceli, Russell Brandom, Aiha Nguyen)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><strong>About the Series</strong></p><p>Generative AI has seeped into many corners of our lives, and threatens to upend the economy as we know it, from education to the film industry. How do workers’ encounters with it differ from their experiences with other systems of automation? How are they similar, and how might this help us understand the shape and stakes of this latest technology?</p><p>In this three-part Databite series, Data & Society’s Labor Futures program brings together creators, platform workers, call center workers, coders, therapists, and performers for conversations with technologists, researchers, journalists, and economists to complicate the story of generative AI. By centering workers’ experiences and interrogating the relationship between generative AI and underexplored issues of hierarchy, recognition, and adaptation in labor, these interdisciplinary conversations will uncover how new technological systems are impacting worker agency and power.</p><p><a href="https://datasociety.net/events/generative-ais-labor-impacts/">Learn more about the speakers, series, and references at datasociety.net.</a></p>
]]></content:encoded>
      <enclosure length="58007020" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/ef445e52-2eee-40fa-8dd9-40be97359037/audio/ce2d2b59-ceba-4c30-8dcb-77dbf904dcd6/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite No. 156] Hierarchy | Generative AI&apos;s Labor Impacts</itunes:title>
      <itunes:author>John Lopez, Milagros Miceli, Russell Brandom, Aiha Nguyen</itunes:author>
      <itunes:duration>01:00:25</itunes:duration>
      <itunes:summary>Developers claim generative AI will have sweeping impacts that transform work as we know it, creating new opportunities for workers and unleashing dramatic waves of creativity. But this technology will not affect everyone equally: societal biases and embedded hierarchies that inform whose work, and what type of work, is valuable will also influence how generative AI is rolled out and who benefits from it. In this first conversation of a three-part series, John Lopez, Milagros Miceli, and Russell Brandom join Data &amp; Society&apos;s Labor Futures Program Director Aiha Nguyen to interrogate these layered issues around generative AI technology; consider how it scaffolds on previous economic models, structures, and modes of employment; and explore its impacts on workers across the globe.</itunes:summary>
      <itunes:subtitle>Developers claim generative AI will have sweeping impacts that transform work as we know it, creating new opportunities for workers and unleashing dramatic waves of creativity. But this technology will not affect everyone equally: societal biases and embedded hierarchies that inform whose work, and what type of work, is valuable will also influence how generative AI is rolled out and who benefits from it. In this first conversation of a three-part series, John Lopez, Milagros Miceli, and Russell Brandom join Data &amp; Society&apos;s Labor Futures Program Director Aiha Nguyen to interrogate these layered issues around generative AI technology; consider how it scaffolds on previous economic models, structures, and modes of employment; and explore its impacts on workers across the globe.</itunes:subtitle>
      <itunes:keywords>generative ai, labor, databite, technology, workers, hierarchy</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>111</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4e02939c-a69d-4b30-9636-8fe487c75dba</guid>
      <title>Caring for Digital Remains | Tamara Kneese and Tonia Sutherland | Network Book Forum</title>
      <description><![CDATA[When people die, they leave behind not only physical belongings, but digital ones. While they might have had specific wishes for what happens to their online profiles and accounts after their deaths, preserving these digital remains is complex and requires specialized forms of care. Because digital remains are attached to corporate platforms — which have control over what online legacies look like and how long they continue — people’s digital afterlives are not necessarily the ones they would have chosen for themselves.

On November 16, Tamara Kneese and Tonia Sutherland came together for a conversation about their books, which both foreground death as a site for understanding the social values and power dynamics of our contemporary, platform-saturated world. The conversation between these two authors was moderated by Tamara K. Nopper, senior researcher with Data & Society’s Labor Futures program. Together, they explored death as a site of contestation and transformation. 
]]></description>
      <pubDate>Tue, 21 Nov 2023 20:30:01 +0000</pubDate>
      <author>events@datasociety.net (Tamara Kneese, Tonia Sutherland)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="57560289" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/609bfac4-296a-4efa-beb7-ae8f4f45fe18/audio/4ee4c760-a1d7-4eba-a494-9b8f026bf42f/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Caring for Digital Remains | Tamara Kneese and Tonia Sutherland | Network Book Forum</itunes:title>
      <itunes:author>Tamara Kneese, Tonia Sutherland</itunes:author>
      <itunes:duration>00:59:56</itunes:duration>
      <itunes:summary>When people die, they leave behind not only physical belongings, but digital ones. While they might have had specific wishes for what happens to their online profiles and accounts after their deaths, preserving these digital remains is complex and requires specialized forms of care. Because digital remains are attached to corporate platforms — which have control over what online legacies look like and how long they continue — people’s digital afterlives are not necessarily the ones they would have chosen for themselves.

On November 16, Tamara Kneese and Tonia Sutherland came together for a conversation about their books, which both foreground death as a site for understanding the social values and power dynamics of our contemporary, platform-saturated world. The conversation between these two authors was moderated by Tamara K. Nopper, senior researcher with Data &amp; Society’s Labor Futures program. Together, they explored death as a site of contestation and transformation.</itunes:summary>
      <itunes:subtitle>When people die, they leave behind not only physical belongings, but digital ones. While they might have had specific wishes for what happens to their online profiles and accounts after their deaths, preserving these digital remains is complex and requires specialized forms of care. Because digital remains are attached to corporate platforms — which have control over what online legacies look like and how long they continue — people’s digital afterlives are not necessarily the ones they would have chosen for themselves.

On November 16, Tamara Kneese and Tonia Sutherland came together for a conversation about their books, which both foreground death as a site for understanding the social values and power dynamics of our contemporary, platform-saturated world. The conversation between these two authors was moderated by Tamara K. Nopper, senior researcher with Data &amp; Society’s Labor Futures program. Together, they explored death as a site of contestation and transformation.</itunes:subtitle>
      <itunes:keywords>digital remains, authors, book talk, digital death, labor, digital, death, resurrection, book, commodification, black people</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>110</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1cdbdbbe-295d-45b8-88f5-3dc4cbec9fb0</guid>
      <title>Decoding the AI Executive Order</title>
      <description><![CDATA[On October 30, the White House issued its long-awaited executive order on artificial intelligence. We’re heartened by the order’s focus on some of AI’s most pressing real-world harms, and especially encouraged by its commitment to apply mandatory rights-protecting practices to the federal government’s use of AI, drawing heavily from the Blueprint for an AI Bill of Rights. A key issue now will be implementing the order’s directives, and addressing the need to put money and people quickly into action across the federal government to advance a very ambitious plan on a short timeline. 

On November 7 at 11 a.m. ET, a special LinkedIn Live event featured analysis of the AI executive order with Data & Society’s Executive Director Janet Haven, Policy Director Brian Chen, and two coauthors of the Blueprint for an AI Bill of Rights: D&S Senior Policy Fellow Sorelle Friedler and Brown University Professor and D&S Board Member Suresh Venkatasubramanian. They offered their impressions of the order, considered the implications of guidance from the Office of Management and Budget, and reflected on what’s next for policy and the field.
]]></description>
      <pubDate>Wed, 8 Nov 2023 15:19:10 +0000</pubDate>
      <author>events@datasociety.net (Brian Chen, Sorelle Friedler, Suresh Venkatasubramanian, Janet Haven)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="56641762" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/d856144c-c20d-4d0a-85a7-5bda8581af4c/audio/41fc3438-8869-4295-b9bb-0bcd9846eefd/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Decoding the AI Executive Order</itunes:title>
      <itunes:author>Brian Chen, Sorelle Friedler, Suresh Venkatasubramanian, Janet Haven</itunes:author>
      <itunes:duration>00:59:00</itunes:duration>
      <itunes:summary>On October 30, the White House issued its long-awaited executive order on artificial intelligence. We’re heartened by the order’s focus on some of AI’s most pressing real-world harms, and especially encouraged by its commitment to apply mandatory rights-protecting practices to the federal government’s use of AI, drawing heavily from the Blueprint for an AI Bill of Rights. A key issue now will be implementing the order’s directives, and addressing the need to put money and people quickly into action across the federal government to advance a very ambitious plan on a short timeline. 

On November 7 at 11 a.m. ET, a special LinkedIn Live event featured analysis of the AI executive order with Data &amp; Society’s Executive Director Janet Haven, Policy Director Brian Chen, and two coauthors of the Blueprint for an AI Bill of Rights: D&amp;S Senior Policy Fellow Sorelle Friedler and Brown University Professor and D&amp;S Board Member Suresh Venkatasubramanian. They offered their impressions of the order, considered the implications of guidance from the Office of Management and Budget, and reflected on what’s next for policy and the field.</itunes:summary>
      <itunes:subtitle>On October 30, the White House issued its long-awaited executive order on artificial intelligence. We’re heartened by the order’s focus on some of AI’s most pressing real-world harms, and especially encouraged by its commitment to apply mandatory rights-protecting practices to the federal government’s use of AI, drawing heavily from the Blueprint for an AI Bill of Rights. A key issue now will be implementing the order’s directives, and addressing the need to put money and people quickly into action across the federal government to advance a very ambitious plan on a short timeline. 

On November 7 at 11 a.m. ET, a special LinkedIn Live event featured analysis of the AI executive order with Data &amp; Society’s Executive Director Janet Haven, Policy Director Brian Chen, and two coauthors of the Blueprint for an AI Bill of Rights: D&amp;S Senior Policy Fellow Sorelle Friedler and Brown University Professor and D&amp;S Board Member Suresh Venkatasubramanian. They offered their impressions of the order, considered the implications of guidance from the Office of Management and Budget, and reflected on what’s next for policy and the field.</itunes:subtitle>
      <itunes:keywords>executive order, federal government, white house, blueprint, ai, ai bill of rights</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>109</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">242a9704-fc0f-4080-b783-da59f5ebcfb0</guid>
      <title>[Databite 155] Democratizing AI: Principles for Meaningful Public Participation</title>
      <description><![CDATA[Even as AI delivers technical and engineering innovations, these systems pose fundamental risks to people, their families, and their communities. Public participation in AI will not be easy, but there are foundational lessons to apply from other domains. Author and legal scholar Michele Gilman’s latest policy brief, Democratizing AI: Principles for Meaningful Public Participation, builds on a comprehensive review of evidence from public participation efforts in anti-poverty programs and environmental policy to offer evidence-based recommendations for how to better structure public participation processes for AI.

To discuss the policy brief, we invited Michele Gilman to be in conversation with Harini Suresh, Assistant Professor of Computer Science at Brown University, and Richard Wingfield, Director of Technology and Human Rights at BSR. The conversation was moderated by D&S Participatory Methods Researcher Meg Young and D&S Policy Director Brian Chen.
]]></description>
      <pubDate>Tue, 24 Oct 2023 20:49:19 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="57895143" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/e7714714-9492-4e29-b448-af8c2e99b87a/audio/59a17f46-8fa8-4451-a521-ac8648ec1408/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite 155] Democratizing AI: Principles for Meaningful Public Participation</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:00:18</itunes:duration>
      <itunes:summary>Even as AI delivers technical and engineering innovations, these systems pose fundamental risks to people, their families, and their communities. Public participation in AI will not be easy, but there are foundational lessons to apply from other domains. Author and legal scholar Michele Gilman’s latest policy brief, Democratizing AI: Principles for Meaningful Public Participation, builds on a comprehensive review of evidence from public participation efforts in anti-poverty programs and environmental policy to offer evidence-based recommendations for how to better structure public participation processes for AI.

To discuss the policy brief, we invited Michele Gilman to be in conversation with Harini Suresh, Assistant Professor of Computer Science at Brown University, and Richard Wingfield, Director of Technology and Human Rights at BSR. The conversation was moderated by D&amp;S Participatory Methods Researcher Meg Young and D&amp;S Policy Director Brian Chen.</itunes:summary>
      <itunes:subtitle>Even as AI delivers technical and engineering innovations, these systems pose fundamental risks to people, their families, and their communities. Public participation in AI will not be easy, but there are foundational lessons to apply from other domains. Author and legal scholar Michele Gilman’s latest policy brief, Democratizing AI: Principles for Meaningful Public Participation, builds on a comprehensive review of evidence from public participation efforts in anti-poverty programs and environmental policy to offer evidence-based recommendations for how to better structure public participation processes for AI.

To discuss the policy brief, we invited Michele Gilman to be in conversation with Harini Suresh, Assistant Professor of Computer Science at Brown University, and Richard Wingfield, Director of Technology and Human Rights at BSR. The conversation was moderated by D&amp;S Participatory Methods Researcher Meg Young and D&amp;S Policy Director Brian Chen.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>108</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">ed8b0132-7e32-42f1-9372-73a9970603a8</guid>
      <title>Network Book Forum | Disrupting DC: The Rise of Uber and the Fall of the City | Katie Wells and Kafui Attoh</title>
      <description><![CDATA[As a tech platform and a company, Uber has become emblematic of an economic shift toward precarious, low-wage gig work and declining labor standards, which has unfolded under the guise of innovation. But an overlooked dimension of Uber’s rise is how the company capitalized on deeper tensions at the heart of urban politics. In Disrupting DC: The Rise of Uber and the Fall of the City, authors Katie Wells, Kafui Attoh, and Declan Cullen tell the story of Uber as a political force, revealing how DC became a testing ground and eventual “playbook” for the company’s consolidation of power across the nation and the globe.

During our September 21 Network Book Forum, co-authors Katie Wells and Kafui Attoh discussed their book with M.R. Sauter in a conversation moderated by Data & Society researcher Alexandra Mateescu. 
]]></description>
      <pubDate>Fri, 29 Sep 2023 12:00:00 +0000</pubDate>
      <author>events@datasociety.net (Alexandra Mateescu, Katie Wells, Kafui Attoh, M.R. Sauter)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="57912886" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/c72bf7f3-758e-4749-9b19-a319b59ed419/audio/a8f16fd4-908b-4368-b5ad-596601d8671c/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Network Book Forum | Disrupting DC: The Rise of Uber and the Fall of the City | Katie Wells and Kafui Attoh</itunes:title>
      <itunes:author>Alexandra Mateescu, Katie Wells, Kafui Attoh, M.R. Sauter</itunes:author>
      <itunes:duration>01:00:19</itunes:duration>
      <itunes:summary>As a tech platform and a company, Uber has become emblematic of an economic shift toward precarious, low-wage gig work and declining labor standards, which has unfolded under the guise of innovation. But an overlooked dimension of Uber’s rise is how the company capitalized on deeper tensions at the heart of urban politics. In Disrupting DC: The Rise of Uber and the Fall of the City, authors Katie Wells, Kafui Attoh, and Declan Cullen tell the story of Uber as a political force, revealing how DC became a testing ground and eventual “playbook” for the company’s consolidation of power across the nation and the globe.

During our September 21 Network Book Forum, co-authors Katie Wells and Kafui Attoh discussed their book with M.R. Sauter in a conversation moderated by Data &amp; Society researcher Alexandra Mateescu.</itunes:summary>
      <itunes:subtitle>As a tech platform and a company, Uber has become emblematic of an economic shift toward precarious, low-wage gig work and declining labor standards, which has unfolded under the guise of innovation. But an overlooked dimension of Uber’s rise is how the company capitalized on deeper tensions at the heart of urban politics. In Disrupting DC: The Rise of Uber and the Fall of the City, authors Katie Wells, Kafui Attoh, and Declan Cullen tell the story of Uber as a political force, revealing how DC became a testing ground and eventual “playbook” for the company’s consolidation of power across the nation and the globe.

During our September 21 Network Book Forum, co-authors Katie Wells and Kafui Attoh discussed their book with M.R. Sauter in a conversation moderated by Data &amp; Society researcher Alexandra Mateescu.</itunes:subtitle>
      <itunes:keywords>private, washington dc, partnership, governance, book, city, uber, public</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>107</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1cfb8749-d45c-4894-93c1-a2537bf459df</guid>
      <title>Fellows Capstone Conversation: &quot;Make a Way&quot; | Lindsey Cameron with Sareeta Amrute</title>
      <description><![CDATA[<p>"I've always loved the term triple threat: someone who can do research, consulting, and teaching together, consulting being engaged with the world. I knew, yes, I want to be a triple threat. That's been my steadiness, or my purpose, that I've held onto for a long time." - Lindsey Cameron </p><p> “Creating your own terms for how you want to be in the world always has to be done in solidarity with others. That's why I get so much from these conversations and the fellowship.” - Sareeta Amrute </p><p>Data & Society launched Race and Technology fellowships three years ago to recognize how important questions of race, and analogous concepts like caste, are to studying, developing, and using emerging technologies. This year's fellows, Lindsey Cameron and Christina Harrington, convened interdisciplinary groups to talk through shared analysis and points of difference in their respective fields, devising nuanced ways to engage with the intersections of tech and race. </p><p>Recorded in April 2023. </p><p>Learn more at <a href="https://www.datasociety.net" target="_blank">www.datasociety.net</a>. </p><p>__ </p><p>Data & Society studies the social implications of data-centric technologies, automation, and AI. Through empirical research and active engagement, our work illuminates the values and decisions that drive these systems — and shows why they must be grounded in equity and human dignity.</p>
]]></description>
      <pubDate>Mon, 7 Aug 2023 22:31:47 +0000</pubDate>
      <author>events@datasociety.net (Lindsey Cameron, Sareeta Amrute)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>"I've always loved the term triple threat: someone who can do research, consulting, and teaching together, consulting being engaged with the world. I knew, yes, I want to be a triple threat. That's been my steadiness, or my purpose, that I've held onto for a long time." - Lindsey Cameron </p><p> “Creating your own terms for how you want to be in the world always has to be done in solidarity with others. That's why I get so much from these conversations and the fellowship.” - Sareeta Amrute </p><p>Data & Society launched Race and Technology fellowships three years ago to recognize how important questions of race, and analogous concepts like caste, are to studying, developing, and using emerging technologies. This year's fellows, Lindsey Cameron and Christina Harrington, convened interdisciplinary groups to talk through shared analysis and points of difference in their respective fields, devising nuanced ways to engage with the intersections of tech and race. </p><p>Recorded in April 2023. </p><p>Learn more at <a href="https://www.datasociety.net" target="_blank">www.datasociety.net</a>. </p><p>__ </p><p>Data & Society studies the social implications of data-centric technologies, automation, and AI. Through empirical research and active engagement, our work illuminates the values and decisions that drive these systems — and shows why they must be grounded in equity and human dignity.</p>
]]></content:encoded>
      <enclosure length="30248261" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/d9ec658e-9b7b-445c-89c3-f4ce00a4779f/audio/89544551-9f0a-4171-9f5c-38f48501c10e/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Fellows Capstone Conversation: &quot;Make a Way&quot; | Lindsey Cameron with Sareeta Amrute</itunes:title>
      <itunes:author>Lindsey Cameron, Sareeta Amrute</itunes:author>
      <itunes:duration>00:31:30</itunes:duration>
      <itunes:summary>2022-23 Race and Technology Fellow Lindsey Cameron discusses what drew her to her work, the convenings and connections the program has made possible, and where she&apos;ll take these ideas next with Fellowship Director Sareeta Amrute.</itunes:summary>
      <itunes:subtitle>2022-23 Race and Technology Fellow Lindsey Cameron discusses what drew her to her work, the convenings and connections the program has made possible, and where she&apos;ll take these ideas next with Fellowship Director Sareeta Amrute.</itunes:subtitle>
      <itunes:keywords>gig labor, technology</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>105</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b3cf0d1a-e21d-4e10-a405-67a5c12d7f20</guid>
      <title>Fellows Capstone Conversation: &quot;What Guides Us&quot; | Christina Harrington with Sareeta Amrute</title>
      <description><![CDATA[<p>“I always say that my research, even in the academy, has these parallel interests of thinking about how we make the technology itself more equitable, but then also thinking about -- how do we make the methods, whether they be the design methods or the research methods, more equitable and more accessible?” - Christina Harrington </p><p>Data & Society launched Race and Technology fellowships three years ago to recognize how important questions of race, and analogous concepts like caste, are to studying, developing, and using emerging technologies. This year's fellows, Lindsey Cameron and Christina Harrington, convened interdisciplinary groups to talk through shared analysis and points of difference in their respective fields, devising nuanced ways to engage with the intersections of tech and race. </p><p>Recorded in May 2023. </p><p>Learn more at <a href="https://www.datasociety.net" target="_blank">www.datasociety.net</a>. </p><p>__ </p><p>Data & Society studies the social implications of data-centric technologies, automation, and AI. Through empirical research and active engagement, our work illuminates the values and decisions that drive these systems — and shows why they must be grounded in equity and human dignity.</p>
]]></description>
      <pubDate>Mon, 7 Aug 2023 22:31:01 +0000</pubDate>
      <author>events@datasociety.net (Christina Harrington, Sareeta Amrute)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>“I always say that my research, even in the academy, has these parallel interests of thinking about how we make the technology itself more equitable, but then also thinking about -- how do we make the methods, whether they be the design methods or the research methods, more equitable and more accessible?” - Christina Harrington </p><p>Data & Society launched Race and Technology fellowships three years ago to recognize how important questions of race, and analogous concepts like caste, are to studying, developing, and using emerging technologies. This year's fellows, Lindsey Cameron and Christina Harrington, convened interdisciplinary groups to talk through shared analysis and points of difference in their respective fields, devising nuanced ways to engage with the intersections of tech and race. </p><p>Recorded in May 2023. </p><p>Learn more at <a href="https://www.datasociety.net" target="_blank">www.datasociety.net</a>. </p><p>__ </p><p>Data & Society studies the social implications of data-centric technologies, automation, and AI. Through empirical research and active engagement, our work illuminates the values and decisions that drive these systems — and shows why they must be grounded in equity and human dignity.</p>
]]></content:encoded>
      <enclosure length="27872999" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/012efe60-0947-4c24-b9ab-ebba9499af2f/audio/a5045725-3f8d-4e2d-8dfd-5a1802d29a73/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Fellows Capstone Conversation: &quot;What Guides Us&quot; | Christina Harrington with Sareeta Amrute</itunes:title>
      <itunes:author>Christina Harrington, Sareeta Amrute</itunes:author>
      <itunes:duration>00:29:02</itunes:duration>
      <itunes:summary>2022-23 Race and Technology Fellow Christina Harrington discusses what drives her work, the convenings and connections the program has made possible, and where she&apos;ll take these ideas next with Fellowship Director Sareeta Amrute. </itunes:summary>
      <itunes:subtitle>2022-23 Race and Technology Fellow Christina Harrington discusses what drives her work, the convenings and connections the program has made possible, and where she&apos;ll take these ideas next with Fellowship Director Sareeta Amrute. </itunes:subtitle>
      <itunes:keywords>participatory design, technology</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>106</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">818becce-bd2e-45b7-b474-835a9fc26a2d</guid>
      <title>[Databite 154] The Trauma of Caste in Tech: In Conversation with Thenmozhi Soundararajan</title>
      <description><![CDATA[<p>Despite the ban on untouchability 70 years ago, caste, one of the oldest systems of exclusion in the world, is thriving — impacting 1.9 billion people worldwide. And the wreckages of caste are replicated in the US and elsewhere, showing up at work, at school, in housing, and in technology, and forcing countless Dalits to live in fear of being outed.</p><p>In <i>The Trauma of Caste: A Dalit Feminist Meditation on Survivorship, Healing, and Abolition</i>, Dalit American activist Thenmozhi Soundararajan puts forth a call to awaken and act, not just for readers in South Asia, but around the world. She ties Dalit oppression to fights for liberation among Black, Indigenous, Latinx, femme, and queer communities, examining caste from a feminist, abolitionist, and Dalit Buddhist perspective — and laying bare the grief, rage, and stolen futures enacted by Brahminical social structures.</p><p>Purchase your copy of <i>The Trauma of Caste</i>: <a href="https://bookshop.org/a/14284/9781623177652" target="_blank">https://bookshop.org/a/14284/9781623177652</a></p>
]]></description>
      <pubDate>Wed, 22 Mar 2023 20:29:59 +0000</pubDate>
      <author>events@datasociety.net (Thenmozhi Soundararajan, Sareeta Amrute)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Despite the ban on untouchability 70 years ago, caste, one of the oldest systems of exclusion in the world, is thriving — impacting 1.9 billion people worldwide. And the wreckages of caste are replicated in the US and elsewhere, showing up at work, at school, in housing, and in technology, and forcing countless Dalits to live in fear of being outed.</p><p>In <i>The Trauma of Caste: A Dalit Feminist Meditation on Survivorship, Healing, and Abolition</i>, Dalit American activist Thenmozhi Soundararajan puts forth a call to awaken and act, not just for readers in South Asia, but around the world. She ties Dalit oppression to fights for liberation among Black, Indigenous, Latinx, femme, and queer communities, examining caste from a feminist, abolitionist, and Dalit Buddhist perspective — and laying bare the grief, rage, and stolen futures enacted by Brahminical social structures.</p><p>Purchase your copy of <i>The Trauma of Caste</i>: <a href="https://bookshop.org/a/14284/9781623177652" target="_blank">https://bookshop.org/a/14284/9781623177652</a></p>
]]></content:encoded>
      <enclosure length="59073736" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/66b91da1-a5a0-4be8-9a80-13d3037b8a1f/audio/2a94ad98-a722-4385-9380-21178d1c8a71/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite 154] The Trauma of Caste in Tech: In Conversation with Thenmozhi Soundararajan</itunes:title>
      <itunes:author>Thenmozhi Soundararajan, Sareeta Amrute</itunes:author>
      <itunes:duration>01:01:32</itunes:duration>
      <itunes:summary>In a book talk on March 21, Soundararajan joined Data &amp; Society Principal Researcher Sareeta Amrute to discuss her incisive and urgent work The Trauma of Caste: A Dalit Feminist Meditation on Survivorship, Healing, and Abolition, including critical questions about caste in technology.</itunes:summary>
      <itunes:subtitle>In a book talk on March 21, Soundararajan joined Data &amp; Society Principal Researcher Sareeta Amrute to discuss her incisive and urgent work The Trauma of Caste: A Dalit Feminist Meditation on Survivorship, Healing, and Abolition, including critical questions about caste in technology.</itunes:subtitle>
      <itunes:keywords>south asia, liberation, book talk, tech, caste, activism, dalit american</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>104</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">364bf98d-3672-43f6-8496-528ebaee9d20</guid>
      <title>[Databite 153] Essentially Unprotected: Health Data and Surveillance of Essential Workers during the COVID-19 Pandemic</title>
      <description><![CDATA[Data & Society’s report Essentially Unprotected is based on interviews with 50 people who worked in grocery, warehousing, manufacturing, or meat and food processing during the pandemic. The report highlights their experiences and efforts to manage the confusing and often terrifying challenges of the in-person pandemic workplace.

In this conversation featuring Angela Stuesse and Irene Tung, Amanda Lenhart and Livia Garofalo examine the social, economic, and regulatory environment that laid the groundwork for serious information gaps surrounding infections. They explore how technology contributed to the collection of data and worsened workers’ stress and frustration — and, in select cases, facilitated information-sharing that protected workers’ privacy and addressed their fears.

Read the report: https://datasociety.net/library/essentially-unprotected/
]]></description>
      <pubDate>Fri, 24 Feb 2023 16:35:51 +0000</pubDate>
      <author>events@datasociety.net (Angela Stuesse, Irene Tung, Amanda Lenhart, Livia Garofalo)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="52067120" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/21d6a4df-9c4b-4164-87c0-42d6cf522ea2/audio/9084581c-cafe-46bd-a6c6-f2e279991e28/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>[Databite 153] Essentially Unprotected: Health Data and Surveillance of Essential Workers during the COVID-19 Pandemic</itunes:title>
      <itunes:author>Angela Stuesse, Irene Tung, Amanda Lenhart, Livia Garofalo</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/6e578720-4142-4f32-abbd-bd03803803b1/3000x3000/data-society-logo-sqaure-whiteat2x.jpg?aid=rss_feed"/>
      <itunes:duration>00:54:13</itunes:duration>
      <itunes:summary>Data &amp; Society’s report Essentially Unprotected is based on interviews with 50 people who worked in grocery, warehousing, manufacturing, or meat and food processing during the pandemic. The report highlights their experiences and efforts to manage the confusing and often terrifying challenges of the in-person pandemic workplace.

In this conversation featuring Angela Stuesse and Irene Tung, Amanda Lenhart and Livia Garofalo examine the social, economic, and regulatory environment that laid the groundwork for serious information gaps surrounding infections. They explore how technology contributed to the collection of data and worsened workers’ stress and frustration — and, in select cases, facilitated information-sharing that protected workers’ privacy and addressed their fears.

Read the report: https://datasociety.net/library/essentially-unprotected/</itunes:summary>
      <itunes:subtitle>Data &amp; Society’s report Essentially Unprotected is based on interviews with 50 people who worked in grocery, warehousing, manufacturing, or meat and food processing during the pandemic. The report highlights their experiences and efforts to manage the confusing and often terrifying challenges of the in-person pandemic workplace.

In this conversation featuring Angela Stuesse and Irene Tung, Amanda Lenhart and Livia Garofalo examine the social, economic, and regulatory environment that laid the groundwork for serious information gaps surrounding infections. They explore how technology contributed to the collection of data and worsened workers’ stress and frustration — and, in select cases, facilitated information-sharing that protected workers’ privacy and addressed their fears.

Read the report: https://datasociety.net/library/essentially-unprotected/</itunes:subtitle>
      <itunes:keywords>data, pandemic, labor, health, workers, covid-19</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>103</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9b8d9be6-5f61-4097-9ebb-779eba6a8b17</guid>
      <title>Databite No. 152 Cuidado Digital—Derechos Reproductivos, Aborto y Redes Digitales de Cuidado en América Latina</title>
      <description><![CDATA[Activism in Latin America has long fought, and in some cases won, the battle for reproductive freedom. With the recent repeal of Roe v. Wade, the “green wave,” the color associated with the movement for legal, safe, and free abortion that began in Argentina and has spread across the rest of the continent, has reached the United States. The repeal and criminalization of the right to abortion has revived the debate over autonomy, over one’s own body and personal reproductive information, especially in this new landscape of datafication.

In this conversation, Livia Garofalo, a researcher with Data & Society’s Health + Data team, speaks with Eugenia Ferrario, a feminist activist with Socorristas en Red in Argentina, and Rebeca Ramos Duarte, a lawyer and director of El Grupo de Información en Reproducción Elegida (GIRE) in Mexico, to reflect on the importance of care and reproductive freedom. At the center of this event is the concept of “cuidado” (care), conceived as an ethic and practice of solidarity-based relationships, and its digital manifestations.

This Databite was interpreted by Claudia Alvis and Valeria Lara.
]]></description>
      <pubDate>Mon, 21 Nov 2022 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Eugenia Ferrario, Rebeca Ramos, Valeria Lara, Claudia Alvis, Livia Garofalo)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="58615986" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/61e156f3-a69f-4eda-a7d0-eae1059ebd96/audio/8af67afa-4925-4bd4-b591-d4141c532df5/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databite No. 152 Cuidado Digital—Derechos Reproductivos, Aborto y Redes Digitales de Cuidado en América Latina</itunes:title>
      <itunes:author>Eugenia Ferrario, Rebeca Ramos, Valeria Lara, Claudia Alvis, Livia Garofalo</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/3d966a0f-58c0-4be9-80f5-1dd84d118cc3/3000x3000/dands-logo.jpg?aid=rss_feed"/>
      <itunes:duration>01:01:03</itunes:duration>
      <itunes:summary>Activism in Latin America has long fought, and in some cases won, the battle for reproductive freedom. With the recent repeal of Roe v. Wade, the “green wave,” the color associated with the movement for legal, safe, and free abortion that began in Argentina and has spread across the rest of the continent, has reached the United States. The repeal and criminalization of the right to abortion has revived the debate over autonomy, over one’s own body and personal reproductive information, especially in this new landscape of datafication.

In this conversation, Livia Garofalo, a researcher with Data &amp; Society’s Health + Data team, speaks with Eugenia Ferrario, a feminist activist with Socorristas en Red in Argentina, and Rebeca Ramos Duarte, a lawyer and director of El Grupo de Información en Reproducción Elegida (GIRE) in Mexico, to reflect on the importance of care and reproductive freedom. At the center of this event is the concept of “cuidado” (care), conceived as an ethic and practice of solidarity-based relationships, and its digital manifestations.

This Databite was interpreted by Claudia Alvis and Valeria Lara.</itunes:summary>
      <itunes:subtitle>Activism in Latin America has long fought, and in some cases won, the battle for reproductive freedom. With the recent repeal of Roe v. Wade, the “green wave,” the color associated with the movement for legal, safe, and free abortion that began in Argentina and has spread across the rest of the continent, has reached the United States. The repeal and criminalization of the right to abortion has revived the debate over autonomy, over one’s own body and personal reproductive information, especially in this new landscape of datafication.

In this conversation, Livia Garofalo, a researcher with Data &amp; Society’s Health + Data team, speaks with Eugenia Ferrario, a feminist activist with Socorristas en Red in Argentina, and Rebeca Ramos Duarte, a lawyer and director of El Grupo de Información en Reproducción Elegida (GIRE) in Mexico, to reflect on the importance of care and reproductive freedom. At the center of this event is the concept of “cuidado” (care), conceived as an ethic and practice of solidarity-based relationships, and its digital manifestations.

This Databite was interpreted by Claudia Alvis and Valeria Lara.</itunes:subtitle>
      <itunes:keywords>argentina, mexico, abortion, cuidado, latin america, digital networks, reproductive rights</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>102</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">77e09155-f41f-49b1-b887-7a3ab976501b</guid>
      <title>Databite No. 152 Cuidado Digital—Reproductive Rights, Abortion, and Digital Networks of Care in Latin America</title>
      <description><![CDATA[With the repeal of Roe v. Wade in the US, the “green wave” — a color associated with the movement for safe and legal abortion that started in Argentina and spread to the rest of the continent — has reached American shores. With it have come debates about bodily autonomy and, in an increasingly datafied landscape, ownership of personal reproductive information.

In this conversation, Livia Garofalo, researcher with Data & Society’s Health and Data team, spoke to Eugenia Ferrario, a feminist activist and educator with the abortion care network Socorristas en Red in Argentina, and Rebeca Ramos Duarte, a lawyer in Mexico and director of El Grupo de Información en Reproducción Elegida, about the significance of reproductive freedom and care in the current climate. In both English and Spanish, this conversation centers cuidado (which means “care” in Spanish) as both the means and an end to providing safe abortions, connecting activists, and understanding how the “digital” can facilitate and impede reproductive liberation.

This Databite was interpreted by Claudia Alvis and Valeria Lara. 
]]></description>
      <pubDate>Mon, 21 Nov 2022 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Eugenia Ferrario, Rebeca Ramos, Livia Garofalo, Claudia Alvis, Valeria Lara)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="58557472" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/f8b4553c-f6b5-40b6-8d52-750fb3667ca6/audio/1d11ba19-939c-4bbb-8ee0-0ac5bcae0d54/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databite No. 152 Cuidado Digital—Reproductive Rights, Abortion, and Digital Networks of Care in Latin America</itunes:title>
      <itunes:author>Eugenia Ferrario, Rebeca Ramos, Livia Garofalo, Claudia Alvis, Valeria Lara</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/21c72a93-6d43-49ea-a074-4b84810d9b14/3000x3000/dands-logo.jpg?aid=rss_feed"/>
      <itunes:duration>01:00:59</itunes:duration>
      <itunes:summary>With the repeal of Roe v. Wade in the US, the “green wave” — a color associated with the movement for safe and legal abortion that started in Argentina and spread to the rest of the continent — has reached American shores. With it have come debates about bodily autonomy and, in an increasingly datafied landscape, ownership of personal reproductive information.

In this conversation, Livia Garofalo, researcher with Data &amp; Society’s Health and Data team, spoke to Eugenia Ferrario, a feminist activist and educator with the abortion care network Socorristas en Red in Argentina, and Rebeca Ramos Duarte, a lawyer in Mexico and director of El Grupo de Información en Reproducción Elegida, about the significance of reproductive freedom and care in the current climate. In both English and Spanish, this conversation centers cuidado (which means “care” in Spanish) as both the means and an end to providing safe abortions, connecting activists, and understanding how the “digital” can facilitate and impede reproductive liberation.

This Databite was interpreted by Claudia Alvis and Valeria Lara.</itunes:summary>
      <itunes:subtitle>With the repeal of Roe v. Wade in the US, the “green wave” — a color associated with the movement for safe and legal abortion that started in Argentina and spread to the rest of the continent — has reached American shores. With it have come debates about bodily autonomy and, in an increasingly datafied landscape, ownership of personal reproductive information.

In this conversation, Livia Garofalo, researcher with Data &amp; Society’s Health and Data team, spoke to Eugenia Ferrario, a feminist activist and educator with the abortion care network Socorristas en Red in Argentina, and Rebeca Ramos Duarte, a lawyer in Mexico and director of El Grupo de Información en Reproducción Elegida, about the significance of reproductive freedom and care in the current climate. In both English and Spanish, this conversation centers cuidado (which means “care” in Spanish) as both the means and an end to providing safe abortions, connecting activists, and understanding how the “digital” can facilitate and impede reproductive liberation.

This Databite was interpreted by Claudia Alvis and Valeria Lara.</itunes:subtitle>
      <itunes:keywords>argentina, abortion, reproductive rights, mexico, latin america, cuidado, digital networks</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>102</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">44997671-a117-4576-a255-31a2b0f7f19e</guid>
      <title>Databite No. 151 Power and Retail at the Digital Doorstep</title>
      <description><![CDATA[<p>Read the Report: <a href="https://datasociety.net/library/at-the-digital-doorstep/" target="_blank">"At the Digital Doorstep"</a></p>
]]></description>
      <pubDate>Fri, 18 Nov 2022 18:46:13 +0000</pubDate>
      <author>events@datasociety.net (Aiha Nguyen, Eve Zelickson, Edward Ongweso, Moira Weigel)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Read the Report: <a href="https://datasociety.net/library/at-the-digital-doorstep/" target="_blank">"At the Digital Doorstep"</a></p>
]]></content:encoded>
      <enclosure length="58641482" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/70f22d1c-8340-48c9-9a4d-a84fcf227ccc/audio/862bb45e-4c6c-4f2c-8d70-725507f94979/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databite No. 151 Power and Retail at the Digital Doorstep</itunes:title>
      <itunes:author>Aiha Nguyen, Eve Zelickson, Edward Ongweso, Moira Weigel</itunes:author>
      <itunes:duration>01:01:05</itunes:duration>
      <itunes:summary>When an Amazon package arrives at your doorstep, it’s a non-event. Rarely do we pause to wonder about the workers involved, including the original merchant who shipped our purchase or the delivery driver who ferried it to our porch. This is no accident: logistics that appear seamless and impersonal are ones Amazon can manage at a distance. Having an army of hidden intermediaries also allows Amazon and other giant retail enterprises to set a grueling gold standard for retail interactions.

Edward Ongweso and Moira Weigel joined us for “Power and Retail at the Digital Doorstep,” a conversation hosted by Data &amp; Society’s Labor Futures initiative, about the hidden relational dynamics that shape how we shop today. With a pair of D&amp;S reports on the changing nature of commerce as our starting point, we discussed the experiences of two crucial but often overlooked workforces — third-party sellers and delivery drivers — in an effort to trace emerging networks of control and power in retail.</itunes:summary>
      <itunes:subtitle>When an Amazon package arrives at your doorstep, it’s a non-event. Rarely do we pause to wonder about the workers involved, including the original merchant who shipped our purchase or the delivery driver who ferried it to our porch. This is no accident: logistics that appear seamless and impersonal are ones Amazon can manage at a distance. Having an army of hidden intermediaries also allows Amazon and other giant retail enterprises to set a grueling gold standard for retail interactions.

Edward Ongweso and Moira Weigel joined us for “Power and Retail at the Digital Doorstep,” a conversation hosted by Data &amp; Society’s Labor Futures initiative, about the hidden relational dynamics that shape how we shop today. With a pair of D&amp;S reports on the changing nature of commerce as our starting point, we discussed the experiences of two crucial but often overlooked workforces — third-party sellers and delivery drivers — in an effort to trace emerging networks of control and power in retail.</itunes:subtitle>
      <itunes:keywords>doorstep, amazon, power, labor, retail</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>101</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">b13612a8-3824-4d54-83f6-d76daa43a8fc</guid>
      <title>Databite No. 150 AI in/from the Majority World – Unscripted Conversation</title>
      <description><![CDATA[<p><a href="https://datasociety.net/library/a-primer-on-ai-in-from-the-majority-world/" target="_blank">A PRIMER ON AI IN/FROM THE MAJORITY WORLD—An Empirical Site and a Standpoint</a></p>
]]></description>
      <pubDate>Tue, 20 Sep 2022 19:36:21 +0000</pubDate>
      <author>events@datasociety.net (Rigoberto Lara Guzmán, Ranjit Singh, Sareeta Amrute, Dibyadyuti Roy, Kimberly Fernandes, Nicolás Llano Linares, Soledad Magnone, Vasundhra Dahiya)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><a href="https://datasociety.net/library/a-primer-on-ai-in-from-the-majority-world/" target="_blank">A PRIMER ON AI IN/FROM THE MAJORITY WORLD—An Empirical Site and a Standpoint</a></p>
]]></content:encoded>
      <enclosure length="46273652" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/44821672-5419-4b19-9591-7a09b058961c/audio/61f60d6f-4342-4246-9d18-c5dd6aeafe31/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databite No. 150 AI in/from the Majority World – Unscripted Conversation</itunes:title>
      <itunes:author>Rigoberto Lara Guzmán, Ranjit Singh, Sareeta Amrute, Dibyadyuti Roy, Kimberly Fernandes, Nicolás Llano Linares, Soledad Magnone, Vasundhra Dahiya</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/16a0d0cf-fa2a-4559-9d11-43d67dae8135/3000x3000/logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:48:12</itunes:duration>
      <itunes:summary>A Primer on AI in/from the Majority World is a curated collection of over 160 thematic works that serve as pathways to explore the presence of artificial intelligence and technology in the geographic regions that are home to the majority of the human population. Instead of assuming that knowledge and innovations move out of the so-called centers of Europe and the United States to the rest of the world, thinking from the “majority world” (a term coined by Bangladeshi photographer Shahidul Alam) means tracing emerging forms of knowledge, innovation, and labor in former and still-colonized spaces. “Majority world” defines a community in terms of what it has, rather than what it lacks.

The outcome of a deep collaboration between Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán (and informed by a range of feminists, Indigenous thinkers, anti-caste scholars, and Afro-futurists), this primer is the latest in a collection of research from Data &amp; Society that reframes current understandings of AI and data-centric technology from a majority world perspective.</itunes:summary>
      <itunes:subtitle>A Primer on AI in/from the Majority World is a curated collection of over 160 thematic works that serve as pathways to explore the presence of artificial intelligence and technology in the geographic regions that are home to the majority of the human population. Instead of assuming that knowledge and innovations move out of the so-called centers of Europe and the United States to the rest of the world, thinking from the “majority world” (a term coined by Bangladeshi photographer Shahidul Alam) means tracing emerging forms of knowledge, innovation, and labor in former and still-colonized spaces. “Majority world” defines a community in terms of what it has, rather than what it lacks.

The outcome of a deep collaboration between Sareeta Amrute, Ranjit Singh, and Rigoberto Lara Guzmán (and informed by a range of feminists, Indigenous thinkers, anti-caste scholars, and Afro-futurists), this primer is the latest in a collection of research from Data &amp; Society that reframes current understandings of AI and data-centric technology from a majority world perspective.</itunes:subtitle>
      <itunes:keywords>extraction, afro-modernities, global south, decolonial ai, labor, migration, indigeneity, caste, social protection, ai, feminist ai, development, surveillance, colonialism, majority world, experimentation</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>100</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e5e46b06-2b9e-4379-941f-8a2af49eed0c</guid>
      <title>In Fellowship: 2021-2022 Capstone Conversation</title>
      <description><![CDATA[<p>Recorded on June 3, 2022. Learn more at www.datasociety.net.</p>
]]></description>
      <pubDate>Thu, 1 Sep 2022 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Recorded on June 3, 2022. Learn more at www.datasociety.net.</p>
]]></content:encoded>
      <enclosure length="85632388" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/9e95c292-6975-4551-851b-85dd14b2a33d/audio/90ca7e89-dd6b-4fc2-835e-bbf97e70d235/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>In Fellowship: 2021-2022 Capstone Conversation</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:29:06</itunes:duration>
      <itunes:summary>2021-2022 Fellows Chaz Arnett, Tamara K. Nopper, and Murali Shanmugavelan, and Postdoctoral Researcher Tiara Roxanne reflect on their experience at Data &amp; Society during the second year of a global pandemic in this conversation with Principal Researcher and Fellowship Director Sareeta Amrute. 

Through honing methods, expanding frameworks of understanding, developing curricula, curating panels, and swapping recipes, the cohort shares how they build their own intellectual and interpersonal community situated beyond the larger contexts of institutions and identities. </itunes:summary>
      <itunes:subtitle>2021-2022 Fellows Chaz Arnett, Tamara K. Nopper, and Murali Shanmugavelan, and Postdoctoral Researcher Tiara Roxanne reflect on their experience at Data &amp; Society during the second year of a global pandemic in this conversation with Principal Researcher and Fellowship Director Sareeta Amrute. 

Through honing methods, expanding frameworks of understanding, developing curricula, curating panels, and swapping recipes, the cohort shares how they build their own intellectual and interpersonal community situated beyond the larger contexts of institutions and identities. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>99</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">834d9d0d-c4e8-4cba-afb4-9f3455c1d1a9</guid>
      <title>Book Forum Series: Democracy&apos;s Data</title>
      <description><![CDATA[On August 11, 2022, Dan Bouk discussed his latest book, Democracy's Data: The Hidden Stories in the U.S. Census and How to Read Them, with Dr. Alex Hanna, Director of Research at the Distributed AI Research Institute. The conversation was moderated by Data & Society People and Culture Manager Ronteau Coppin.

The census isn’t just a data-collection process; it’s a ritual, and a tool, of American democracy. Behind every neat grid of numbers is a collage of messy, human stories—you just have to know how to read them.

In Democracy’s Data, the data historian Dan Bouk examines the 1940 U.S. census, uncovering what those numbers both condense and cleverly abstract: a universe of meaning and uncertainty, of cultural negotiation and political struggle. He introduces us to the men and women employed as census takers, bringing us with them as they go door to door, recording the lives of their neighbors. He takes us into the makeshift halls of the Census Bureau, where hundreds of civil servants, not to mention machines, labored with pencil and paper to divide and conquer the nation’s data. And he uses these little points to paint bigger pictures, such as of the ruling hand of white supremacy, the place of queer people in straight systems, and the struggle of ordinary people to be seen by the state as they see themselves.

The 1940 census is a crucial entry in American history, a controversial dataset that enabled the creation of New Deal era social programs, but that also, with the advent of World War Two, would be weaponized against many of the citizens whom it was supposed to serve. In our age of quantification, Democracy’s Data not only teaches us how to read between the lines but gives us a new perspective on the relationship between representation, identity, and governance today.
]]></description>
      <pubDate>Tue, 16 Aug 2022 21:47:05 +0000</pubDate>
      <author>events@datasociety.net (Dan Bouk, Ronteau Coppin, Alex Hanna)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="57573730" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/17bdee00-f49f-47b5-830a-883cdadd801b/audio/d71bac7c-72dd-4930-84b2-36175d677080/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Book Forum Series: Democracy&apos;s Data</itunes:title>
      <itunes:author>Dan Bouk, Ronteau Coppin, Alex Hanna</itunes:author>
      <itunes:duration>00:59:58</itunes:duration>
      <itunes:summary>On August 11, 2022, Dan Bouk discussed his latest book, Democracy&apos;s Data: The Hidden Stories in the U.S. Census and How to Read Them, with Dr. Alex Hanna, Director of Research at the Distributed AI Research Institute. The conversation was moderated by Data &amp; Society People and Culture Manager Ronteau Coppin.

The census isn’t just a data-collection process; it’s a ritual, and a tool, of American democracy. Behind every neat grid of numbers is a collage of messy, human stories—you just have to know how to read them.

In Democracy’s Data, the data historian Dan Bouk examines the 1940 U.S. census, uncovering what those numbers both condense and cleverly abstract: a universe of meaning and uncertainty, of cultural negotiation and political struggle. He introduces us to the men and women employed as census takers, bringing us with them as they go door to door, recording the lives of their neighbors. He takes us into the makeshift halls of the Census Bureau, where hundreds of civil servants, not to mention machines, labored with pencil and paper to divide and conquer the nation’s data. And he uses these little points to paint bigger pictures, such as of the ruling hand of white supremacy, the place of queer people in straight systems, and the struggle of ordinary people to be seen by the state as they see themselves.

The 1940 census is a crucial entry in American history, a controversial dataset that enabled the creation of New Deal era social programs, but that also, with the advent of World War Two, would be weaponized against many of the citizens whom it was supposed to serve. In our age of quantification, Democracy’s Data not only teaches us how to read between the lines but gives us a new perspective on the relationship between representation, identity, and governance today.
</itunes:summary>
      <itunes:subtitle>On August 11, 2022, Dan Bouk discussed his latest book, Democracy&apos;s Data: The Hidden Stories in the U.S. Census and How to Read Them, with Dr. Alex Hanna, Director of Research at the Distributed AI Research Institute. The conversation was moderated by Data &amp; Society People and Culture Manager Ronteau Coppin.

The census isn’t just a data-collection process; it’s a ritual, and a tool, of American democracy. Behind every neat grid of numbers is a collage of messy, human stories—you just have to know how to read them.

In Democracy’s Data, the data historian Dan Bouk examines the 1940 U.S. census, uncovering what those numbers both condense and cleverly abstract: a universe of meaning and uncertainty, of cultural negotiation and political struggle. He introduces us to the men and women employed as census takers, bringing us with them as they go door to door, recording the lives of their neighbors. He takes us into the makeshift halls of the Census Bureau, where hundreds of civil servants, not to mention machines, labored with pencil and paper to divide and conquer the nation’s data. And he uses these little points to paint bigger pictures, such as of the ruling hand of white supremacy, the place of queer people in straight systems, and the struggle of ordinary people to be seen by the state as they see themselves.

The 1940 census is a crucial entry in American history, a controversial dataset that enabled the creation of New Deal-era social programs, but that also, with the advent of World War Two, would be weaponized against many of the citizens whom it was supposed to serve. In our age of quantification, Democracy’s Data not only teaches us how to read between the lines but gives us a new perspective on the relationship between representation, identity, and governance today.
</itunes:subtitle>
      <itunes:keywords>data, census</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>98</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">c5a0e269-8e5b-4a22-924d-7a936f1b574b</guid>
      <title>Book Forum Series: Experiments of the Mind</title>
      <description><![CDATA[Join author Emily Martin (Professor Emerita, Department of Anthropology, NYU), panelists Iretiolu Akinrinade (Research Analyst, Data & Society) and Noelle Stout (faculty member, Program in Advanced Teaching and Research, Apple University), and host Emanuel Moss (Joint Postdoctoral Scholar, Data & Society, Cornell Tech) for a conversation around Experiments of the Mind: From the Cognitive Psychology Lab to the World of Facebook and Twitter.

Experimental cognitive psychology research is a hidden force in our online lives. We engage with it, often unknowingly, whenever we download a health app, complete a Facebook quiz, or rate our latest purchase. How did experimental psychology come to play an outsized role in these developments? Experiments of the Mind considers this question through a look at cognitive psychology laboratories. Emily Martin traces how psychological research methods evolved, escaped the boundaries of the discipline, and infiltrated social media and our digital universe.

Martin recounts her participation in psychology labs, and she conveys their activities through the voices of principal investigators, graduate students, and subjects. Despite claims of experimental psychology's focus on isolated individuals, Martin finds that the history of the field--from early German labs to Gestalt psychology--has led to research methods that are, in fact, highly social. She shows how these methods are deployed online: amplified by troves of data and powerful machine learning, an unprecedented model of human psychology is now widespread--one in which statistical measures are paired with algorithms to predict and influence users' behavior.

Experiments of the Mind examines how psychology research has shaped us to be perfectly suited for our networked age. 
]]></description>
      <pubDate>Wed, 15 Jun 2022 12:05:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="56400323" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/ba2163c2-b00f-4e2b-8833-198a2e7e15f5/audio/4e7ae32d-26e2-45ad-a813-9badda55464b/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Book Forum Series: Experiments of the Mind</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:58:39</itunes:duration>
      <itunes:summary>Join author Emily Martin (Professor Emerita, Department of Anthropology, NYU), panelists Iretiolu Akinrinade (Research Analyst, Data &amp; Society) and Noelle Stout (faculty member, Program in Advanced Teaching and Research, Apple University), and host Emanuel Moss (Joint Postdoctoral Scholar, Data &amp; Society, Cornell Tech) for a conversation around Experiments of the Mind: From the Cognitive Psychology Lab to the World of Facebook and Twitter.

Experimental cognitive psychology research is a hidden force in our online lives. We engage with it, often unknowingly, whenever we download a health app, complete a Facebook quiz, or rate our latest purchase. How did experimental psychology come to play an outsized role in these developments? Experiments of the Mind considers this question through a look at cognitive psychology laboratories. Emily Martin traces how psychological research methods evolved, escaped the boundaries of the discipline, and infiltrated social media and our digital universe.

Martin recounts her participation in psychology labs, and she conveys their activities through the voices of principal investigators, graduate students, and subjects. Despite claims of experimental psychology&apos;s focus on isolated individuals, Martin finds that the history of the field--from early German labs to Gestalt psychology--has led to research methods that are, in fact, highly social. She shows how these methods are deployed online: amplified by troves of data and powerful machine learning, an unprecedented model of human psychology is now widespread--one in which statistical measures are paired with algorithms to predict and influence users&apos; behavior.

Experiments of the Mind examines how psychology research has shaped us to be perfectly suited for our networked age.</itunes:summary>
      <itunes:subtitle>Join author Emily Martin (Professor Emerita, Department of Anthropology, NYU), panelists Iretiolu Akinrinade (Research Analyst, Data &amp; Society) and Noelle Stout (faculty member, Program in Advanced Teaching and Research, Apple University), and host Emanuel Moss (Joint Postdoctoral Scholar, Data &amp; Society, Cornell Tech) for a conversation around Experiments of the Mind: From the Cognitive Psychology Lab to the World of Facebook and Twitter.

Experimental cognitive psychology research is a hidden force in our online lives. We engage with it, often unknowingly, whenever we download a health app, complete a Facebook quiz, or rate our latest purchase. How did experimental psychology come to play an outsized role in these developments? Experiments of the Mind considers this question through a look at cognitive psychology laboratories. Emily Martin traces how psychological research methods evolved, escaped the boundaries of the discipline, and infiltrated social media and our digital universe.

Martin recounts her participation in psychology labs, and she conveys their activities through the voices of principal investigators, graduate students, and subjects. Despite claims of experimental psychology&apos;s focus on isolated individuals, Martin finds that the history of the field--from early German labs to Gestalt psychology--has led to research methods that are, in fact, highly social. She shows how these methods are deployed online: amplified by troves of data and powerful machine learning, an unprecedented model of human psychology is now widespread--one in which statistical measures are paired with algorithms to predict and influence users&apos; behavior.

Experiments of the Mind examines how psychology research has shaped us to be perfectly suited for our networked age.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>97</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">eb7a8c40-a8d3-426b-9651-fdf1ee6e6e74</guid>
      <title>Book Forum Series: Nice White Ladies</title>
      <description><![CDATA[Nice White Ladies: The Truth About White Supremacy, Our Role in It, and How We Can Help Dismantle It, by D&S 2018-2019 Fellow Jessie Daniels, hosted by Principal Researcher & Race and Tech Program Director Sareeta Amrute
--
Named a Best Book of 2021 by Kirkus

An acclaimed expert illuminates the distinctive role that white women play in perpetuating racism, and how they can work to fight it.

In a nation deeply divided by race, the “Karens” of the world are easy to villainize. But in Nice White Ladies, Jessie Daniels addresses the unintended complicity of even well-meaning white women. She reveals how their everyday choices harm communities of color. White mothers, still expected to be the primary parents, too often uncritically choose to send their kids to the “best” schools, collectively leading to a return to segregation. She addresses a feminism that pushes women of color aside, and a wellness industry that insulates white women in a bubble of their own privilege.

Daniels then charts a better path forward. She looks to the white women who fight neo-Nazis online and in the streets, and who challenge all-white spaces from workplaces to schools to neighborhoods. In the end, she shows how her fellow white women can work toward true equality for all. 
]]></description>
      <pubDate>Tue, 7 Jun 2022 10:00:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="84522643" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/0fc1dea4-7f0e-4e08-b26b-f3a4e3a5cee4/audio/78f9e55c-e156-420b-84b4-737e12f5187a/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Book Forum Series: Nice White Ladies</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:58:43</itunes:duration>
      <itunes:summary>Nice White Ladies: The Truth About White Supremacy, Our Role in It, and How We Can Help Dismantle It, by D&amp;S 2018-2019 Fellow Jessie Daniels, hosted by Principal Researcher &amp; Race and Tech Program Director Sareeta Amrute
--
Named a Best Book of 2021 by Kirkus

An acclaimed expert illuminates the distinctive role that white women play in perpetuating racism, and how they can work to fight it.

In a nation deeply divided by race, the “Karens” of the world are easy to villainize. But in Nice White Ladies, Jessie Daniels addresses the unintended complicity of even well-meaning white women. She reveals how their everyday choices harm communities of color. White mothers, still expected to be the primary parents, too often uncritically choose to send their kids to the “best” schools, collectively leading to a return to segregation. She addresses a feminism that pushes women of color aside, and a wellness industry that insulates white women in a bubble of their own privilege.

Daniels then charts a better path forward. She looks to the white women who fight neo-Nazis online and in the streets, and who challenge all-white spaces from workplaces to schools to neighborhoods. In the end, she shows how her fellow white women can work toward true equality for all.</itunes:summary>
      <itunes:subtitle>Nice White Ladies: The Truth About White Supremacy, Our Role in It, and How We Can Help Dismantle It, by D&amp;S 2018-2019 Fellow Jessie Daniels, hosted by Principal Researcher &amp; Race and Tech Program Director Sareeta Amrute
--
Named a Best Book of 2021 by Kirkus

An acclaimed expert illuminates the distinctive role that white women play in perpetuating racism, and how they can work to fight it.

In a nation deeply divided by race, the “Karens” of the world are easy to villainize. But in Nice White Ladies, Jessie Daniels addresses the unintended complicity of even well-meaning white women. She reveals how their everyday choices harm communities of color. White mothers, still expected to be the primary parents, too often uncritically choose to send their kids to the “best” schools, collectively leading to a return to segregation. She addresses a feminism that pushes women of color aside, and a wellness industry that insulates white women in a bubble of their own privilege.

Daniels then charts a better path forward. She looks to the white women who fight neo-Nazis online and in the streets, and who challenge all-white spaces from workplaces to schools to neighborhoods. In the end, she shows how her fellow white women can work toward true equality for all.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>96</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e2b48338-4cb4-4ac1-b394-81d3ade127a9</guid>
      <title>Conversations on the Datafied State – Part Three: Race, Surveillance, Resistance</title>
      <description><![CDATA[Tamara K. Nopper and Chaz Arnett in conversation with Raúl Carrillo and Alyx Goodwin

This panel focuses on how datafication processes relate to social control and surveillance, whether through policing and the criminal punishment system or through credit scoring systems and the monitoring of cash use. State power expands through a widening net of surveillance and tools of automated detection and enforcement, which maintain racial and class hierarchies. Our panel also examines how communities and organizations are resisting the datafied state and its particular impact on Black communities and other communities of color, including efforts to regulate data collection, organize politically against harmful data initiatives, and propose policies that aim for more ethical data practices.  
]]></description>
      <pubDate>Wed, 1 Jun 2022 20:18:27 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="64897015" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/f370b399-3548-4db0-ac41-026196418400/audio/35be8a4d-5a2f-4a43-8a70-c1ba6fa21dea/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Conversations on the Datafied State – Part Three: Race, Surveillance, Resistance</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/9da9b399-92e8-4fa8-b45b-6b912600f720/3000x3000/dands.jpg?aid=rss_feed"/>
      <itunes:duration>01:07:30</itunes:duration>
      <itunes:summary>Tamara K. Nopper and Chaz Arnett in conversation with Raúl Carrillo and Alyx Goodwin

This panel focuses on how datafication processes relate to social control and surveillance, whether through policing and the criminal punishment system or through credit scoring systems and the monitoring of cash use. State power expands through a widening net of surveillance and tools of automated detection and enforcement, which maintain racial and class hierarchies. Our panel also examines how communities and organizations are resisting the datafied state and its particular impact on Black communities and other communities of color, including efforts to regulate data collection, organize politically against harmful data initiatives, and propose policies that aim for more ethical data practices.</itunes:summary>
      <itunes:subtitle>Tamara K. Nopper and Chaz Arnett in conversation with Raúl Carrillo and Alyx Goodwin

This panel focuses on how datafication processes relate to social control and surveillance, whether through policing and the criminal punishment system or through credit scoring systems and the monitoring of cash use. State power expands through a widening net of surveillance and tools of automated detection and enforcement, which maintain racial and class hierarchies. Our panel also examines how communities and organizations are resisting the datafied state and its particular impact on Black communities and other communities of color, including efforts to regulate data collection, organize politically against harmful data initiatives, and propose policies that aim for more ethical data practices.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>95</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">aebfe33b-8f31-449b-af36-4092e6bf620f</guid>
      <title>Conversations on the Datafied State – Part Two: The Automated State</title>
      <description><![CDATA[Ranjit Singh, Researcher at D&S, in conversation with Joanna Redden, Assistant Professor in the Faculty of Information & Media Studies at the University of Western Ontario and co-director of the Data Justice Lab, and Michele E. Gilman, Venable Professor of Law and Associate Dean for Faculty Research and Development at the University of Baltimore School of Law and director of the Civil Advocacy Clinic.

The automated state is one that seeks to replace human workers with machines. There are three general motivations. One, the desire to leverage computational speed to handle rote and routine work more efficiently. Two, the desire to improve the accuracy, fairness, or consistency of decision-making in light of human fallibility. Three, the desire to depoliticize decision-making (or appear to) by placing it out of reach of human discretion. These motivations, however, raise distinctive concerns about oversight and the ability to seek recourse in the case of errors or bugs in decision-making. What does interacting with systems, interfaces, and datasets require of people interfacing with the Datafied State? What literacies are necessary? What room is there for voice? What does automation look like in practice? Who is rendered invisible when showcasing success in automating state practices?

About Data & Society 
Data & Society is an independent nonprofit research organization. We believe that empirical evidence should directly inform the development and governance of new technology. We study the social implications of data and automation, producing original research to ground informed, evidence-based public debate about emerging technology.

About the Series
The Datafied State is an emerging research agenda that seeks to explore the relationship between datafication and public administration. It is concerned with the proliferation of data sources, infrastructures, and computational techniques adopted across the public sector. The processes through which governments procure, develop, implement, and legally mandate the use of digital and computational systems are increasingly blurring the boundaries between what is considered public and private. So, how datafied is the state today? How can we find out?   
]]></description>
      <pubDate>Fri, 13 May 2022 15:27:40 +0000</pubDate>
      <author>events@datasociety.net (Joanna Redden, Ranjit Singh, Michele Gilman)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="86742575" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/769dd830-e0e5-4eb7-9256-11013a94d016/audio/656f9e7d-626d-4176-8838-0133176ec677/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Conversations on the Datafied State – Part Two: The Automated State</itunes:title>
      <itunes:author>Joanna Redden, Ranjit Singh, Michele Gilman</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/ac9204ee-aa92-456d-bdc5-4327c8190f82/3000x3000/dands.jpg?aid=rss_feed"/>
      <itunes:duration>01:00:15</itunes:duration>
      <itunes:summary>Ranjit Singh, Researcher at D&amp;S, in conversation with Joanna Redden, Assistant Professor in the Faculty of Information &amp; Media Studies at the University of Western Ontario and co-director of the Data Justice Lab, and Michele E. Gilman, Venable Professor of Law and Associate Dean for Faculty Research and Development at the University of Baltimore School of Law and director of the Civil Advocacy Clinic.

The automated state is one that seeks to replace human workers with machines. There are three general motivations. One, the desire to leverage computational speed to handle rote and routine work more efficiently. Two, the desire to improve the accuracy, fairness, or consistency of decision-making in light of human fallibility. Three, the desire to depoliticize decision-making (or appear to) by placing it out of reach of human discretion. These motivations, however, raise distinctive concerns about oversight and the ability to seek recourse in the case of errors or bugs in decision-making. What does interacting with systems, interfaces, and datasets require of people interfacing with the Datafied State? What literacies are necessary? What room is there for voice? What does automation look like in practice? Who is rendered invisible when showcasing success in automating state practices?

About Data &amp; Society 
Data &amp; Society is an independent nonprofit research organization. We believe that empirical evidence should directly inform the development and governance of new technology. We study the social implications of data and automation, producing original research to ground informed, evidence-based public debate about emerging technology.

About the Series
The Datafied State is an emerging research agenda that seeks to explore the relationship between datafication and public administration. It is concerned with the proliferation of data sources, infrastructures, and computational techniques adopted across the public sector. The processes through which governments procure, develop, implement, and legally mandate the use of digital and computational systems are increasingly blurring the boundaries between what is considered public and private. So, how datafied is the state today? How can we find out?  </itunes:summary>
      <itunes:subtitle>Ranjit Singh, Researcher at D&amp;S, in conversation with Joanna Redden, Assistant Professor in the Faculty of Information &amp; Media Studies at the University of Western Ontario and co-director of the Data Justice Lab, and Michele E. Gilman, Venable Professor of Law and Associate Dean for Faculty Research and Development at the University of Baltimore School of Law and director of the Civil Advocacy Clinic.

The automated state is one that seeks to replace human workers with machines. There are three general motivations. One, the desire to leverage computational speed to handle rote and routine work more efficiently. Two, the desire to improve the accuracy, fairness, or consistency of decision-making in light of human fallibility. Three, the desire to depoliticize decision-making (or appear to) by placing it out of reach of human discretion. These motivations, however, raise distinctive concerns about oversight and the ability to seek recourse in the case of errors or bugs in decision-making. What does interacting with systems, interfaces, and datasets require of people interfacing with the Datafied State? What literacies are necessary? What room is there for voice? What does automation look like in practice? Who is rendered invisible when showcasing success in automating state practices?

About Data &amp; Society 
Data &amp; Society is an independent nonprofit research organization. We believe that empirical evidence should directly inform the development and governance of new technology. We study the social implications of data and automation, producing original research to ground informed, evidence-based public debate about emerging technology.

About the Series
The Datafied State is an emerging research agenda that seeks to explore the relationship between datafication and public administration. It is concerned with the proliferation of data sources, infrastructures, and computational techniques adopted across the public sector. The processes through which governments procure, develop, implement, and legally mandate the use of digital and computational systems are increasingly blurring the boundaries between what is considered public and private. So, how datafied is the state today? How can we find out?  </itunes:subtitle>
      <itunes:keywords>automation, datafied state</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>94</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">3b2df42d-c274-4ab0-8e9e-feba85d55bd0</guid>
      <title>Conversations on the Datafied State – Part One: What is the Public Interest?</title>
      <description><![CDATA[Jenna Burrell, Director of Research at Data & Society, in conversation with Anne Washington, Assistant Professor of Data Policy at NYU, and Deirdre Mulligan, Professor in the School of Information at UC Berkeley.

Part one in a series of three Conversations on The Datafied State.

The role of government is distinct from that of the private sector. Governments serve the public and prioritize values beyond market fit and return on investment. Governments interface with advocacy groups, unions, and other publics, not just individuals. In their approach to solving problems using computational, data-driven systems, governments have an opportunity to model responsible, accountable, and accessible tech. But what exactly would it mean for that tech to be in “the public interest,” and how are such publics constituted?  
]]></description>
      <pubDate>Wed, 4 May 2022 20:37:53 +0000</pubDate>
      <author>events@datasociety.net (Jenna Burrell, Anne Washington, Deirdre Mulligan)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="84395578" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/ceb74846-fb6e-4a08-b8c7-138b5cb4ee45/audio/03a89d13-e643-493b-b93f-bad05451b6db/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Conversations on the Datafied State – Part One: What is the Public Interest?</itunes:title>
      <itunes:author>Jenna Burrell, Anne Washington, Deirdre Mulligan</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/bd9c2f5e-3143-44b6-a967-ef57082d4b52/3000x3000/dands.jpg?aid=rss_feed"/>
      <itunes:duration>00:58:37</itunes:duration>
      <itunes:summary>Jenna Burrell, Director of Research at Data &amp; Society, in conversation with Anne Washington, Assistant Professor of Data Policy at NYU, and Deirdre Mulligan, Professor in the School of Information at UC Berkeley.

Part one in a series of three Conversations on The Datafied State.

The role of government is distinct from that of the private sector. Governments serve the public and prioritize values beyond market fit and return on investment. Governments interface with advocacy groups, unions, and other publics, not just individuals. In their approach to solving problems using computational, data-driven systems, governments have an opportunity to model responsible, accountable, and accessible tech. But what exactly would it mean for that tech to be in “the public interest,” and how are such publics constituted?</itunes:summary>
      <itunes:subtitle>Jenna Burrell, Director of Research at Data &amp; Society, in conversation with Anne Washington, Assistant Professor of Data Policy at NYU, and Deirdre Mulligan, Professor in the School of Information at UC Berkeley.

Part one in a series of three Conversations on The Datafied State.

The role of government is distinct from that of the private sector. Governments serve the public and prioritize values beyond market fit and return on investment. Governments interface with advocacy groups, unions, and other publics, not just individuals. In their approach to solving problems using computational, data-driven systems, governments have an opportunity to model responsible, accountable, and accessible tech. But what exactly would it mean for that tech to be in “the public interest,” and how are such publics constituted?</itunes:subtitle>
      <itunes:keywords>public interest, datafication, datafied state</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>93</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7c57b7e5-f6af-48db-a220-c13aef52f9b6</guid>
      <title>Network Book Forum: Patching Development</title>
      <description><![CDATA[ 
]]></description>
      <pubDate>Tue, 22 Mar 2022 15:05:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="62786801" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/a961401f-21e6-4485-b395-979d6ec8f34f/audio/a973351d-bff3-47a7-b940-61c405e267b2/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Network Book Forum: Patching Development</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:05:25</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>92</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">cece03ae-33e2-41c6-b55a-460d900ae378</guid>
      <title>Network Book Forum: Digital Black Feminism</title>
      <description><![CDATA[ 
]]></description>
      <pubDate>Mon, 14 Feb 2022 22:36:13 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="61192705" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/527905c6-b5ae-4b5f-aeb3-cc2dc0585181/audio/5c088319-017d-4a0d-9b8b-48db868ff424/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Network Book Forum: Digital Black Feminism</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:03:44</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>91</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">4268d178-9b12-406c-9405-a98d80c1bb0d</guid>
      <title>Network Book Forum: The Distance Cure</title>
      <description><![CDATA[ 
]]></description>
      <pubDate>Wed, 26 Jan 2022 19:01:11 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <enclosure length="57286458" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/04fe7d96-90fd-499f-b0ed-c68c3eb95ab9/audio/95814133-9c27-4489-8845-e0407924d52b/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Network Book Forum: The Distance Cure</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:59:40</itunes:duration>
      <itunes:summary></itunes:summary>
      <itunes:subtitle></itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>90</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1b6f00e5-1368-4a0c-ac4c-2fb298c44723</guid>
      <title>Databite No. 146: Moving Through Molasses: On Intellectual Labor, Productivity, and Belonging</title>
      <description><![CDATA[<p><a href="https://datasociety.net/people/clark-meredith-d/"><strong>Meredith D. Clark, Ph.D. </strong></a>is an assistant professor in the Media Studies department at the University of Virginia. Her professional journalism background informs her primary research on the relationships between Black communities and news media in social media spaces. Her secondary research in critical journalism studies addresses questions of systemic racism in U.S. news media, with a focus on culture and processes in print and digital newsrooms. Her current work contextualizes Black Americans’ use of Twitter to create digital counter-narratives to mainstream news coverage of Black lived experiences as contemporary forms of resistance.</p><p><a href="https://datasociety.net/people/mcglotten-shaka/"><strong>Shaka McGlotten</strong></a> is Professor of Media Studies and Anthropology at Purchase College-SUNY, where they also serve as Chair of the Gender Studies and Global Black Studies Programs. Their work stages encounters between Black study, queer theory, media, and art. Their research focuses on networked intimacies and messy computational entanglements as they interface with QTPOC lifeworlds.</p>
]]></description>
      <pubDate>Mon, 30 Aug 2021 20:23:20 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><a href="https://datasociety.net/people/clark-meredith-d/"><strong>Meredith D. Clark, Ph.D. </strong></a>is an assistant professor in the Media Studies department at the University of Virginia. Her professional journalism background informs her primary research on the relationships between Black communities and news media in social media spaces. Her secondary research in critical journalism studies addresses questions of systemic racism in U.S. news media, with a focus on culture and processes in print and digital newsrooms. Her current work contextualizes Black Americans’ use of Twitter to create digital counter-narratives to mainstream news coverage of Black lived experiences as contemporary forms of resistance.</p><p><a href="https://datasociety.net/people/mcglotten-shaka/"><strong>Shaka McGlotten</strong></a> is Professor of Media Studies and Anthropology at Purchase College-SUNY, where they also serve as Chair of the Gender Studies and Global Black Studies Programs. Their work stages encounters between Black study, queer theory, media, and art. Their research focuses on networked intimacies and messy computational entanglements as they interface with QTPOC lifeworlds.</p>
]]></content:encoded>
      <enclosure length="52638475" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/994a7aa3-54b3-486e-82f5-43d9dcab37bd/audio/c6f0277b-d0f0-43c2-8933-fac846392187/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databite No. 146: Moving Through Molasses: On Intellectual Labor, Productivity, and Belonging</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:54:50</itunes:duration>
      <itunes:summary>In this conversation, 2020-2021 Faculty Fellows Meredith D. Clark and Shaka McGlotten reflect on their experience at Data &amp; Society during a global pandemic. Moving through molasses invokes the challenges of adapting to the existential and emotional fatigue of incessant telepresence interfaces, performative intellectual labor, and the need to balance a professional career amidst ongoing collapse. The talk offers insights on how Queer, Black scholars can move through institutions without forsaking authenticity. It makes a case for productivity refusal as a generative tactic for self-preservation. Molasses as a decelerator medicine for the slow, somatic surrender our bodies need. Molasses also alludes to the thick absurdity of online discourse with its noxious disinformation feeds and the inevitable co-optation of Black vernacular content creation. Despite these traps, one can cultivate community in platform-mediated digital spaces. Often, the virtual sense of belonging is enough to get by.</itunes:summary>
      <itunes:subtitle>In this conversation, 2020-2021 Faculty Fellows Meredith D. Clark and Shaka McGlotten reflect on their experience at Data &amp; Society during a global pandemic. Moving through molasses invokes the challenges of adapting to the existential and emotional fatigue of incessant telepresence interfaces, performative intellectual labor, and the need to balance a professional career amidst ongoing collapse. The talk offers insights on how Queer, Black scholars can move through institutions without forsaking authenticity. It makes a case for productivity refusal as a generative tactic for self-preservation. Molasses as a decelerator medicine for the slow, somatic surrender our bodies need. Molasses also alludes to the thick absurdity of online discourse with its noxious disinformation feeds and the inevitable co-optation of Black vernacular content creation. Despite these traps, one can cultivate community in platform-mediated digital spaces. Often, the virtual sense of belonging is enough to get by.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>89</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">7cdafb68-684b-4f8c-b789-7c5eac2f4205</guid>
      <title>Algorithmic Governance and the State of Impact Assessment in the EU, US, and Canada</title>
      <description><![CDATA[<p>Algorithmic impact assessments have emerged as a centerpiece of the conversation about algorithmic governance. Impact assessments integrate many of the chief tools of algorithmic governance (e.g., auditing, end-to-end governance frameworks, ethics reviews) and speak to the challenges of algorithmic justice, equity, and community redress. Impact assessments (or similar accountability mechanisms) are at the core of recent headlines about procurement practices in Canada, leaked regulatory proposals in the EU, and new efforts to regulate the tech industry in the US. But do impact assessments promise too much? Multiple national and state governments have instituted, or are considering, requirements for impact assessment of algorithmic systems, but there is a surprisingly wide range of structures for these regulations.</p><p><a href="https://edri.org/our-work/welcoming-our-new-senior-policy-advisor-sarah-chander/"><strong>Sarah Chander</strong></a> leads EDRi's policy work on AI and non-discrimination with respect to digital rights. She is interested in building thoughtful, resilient movements, and she looks to make links between the digital and other social justice movements. Sarah has experience in racial and social justice; previously, she worked in advocacy at the European Network Against Racism (ENAR) on a wide range of topics including anti-discrimination law and policy, intersectional justice, state racism, racial profiling, and police brutality.</p><p><a href="https://www.fenwickmckelvey.com/"><strong>Fenwick McKelvey</strong></a> is an Associate Professor in Information and Communication Technology Policy in the Department of Communication Studies at Concordia University. He studies digital politics and policy. He is the author of Internet Daemons: Digital Communications Possessed (University of Minnesota Press, 2018), winner of the 2019 Gertrude J. Robinson Book Award. He is co-author of The Permanent Campaign: New Media, New Politics (Peter Lang, 2012) with Greg Elmer and Ganaele Langlois.</p><p><a href="https://datasociety.net/people/metcalf-jacob/"><strong>Jacob (Jake) Metcalf,</strong></a> PhD, is a researcher at Data & Society, where he is a member of the AI on the Ground Initiative and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies.</p><p><a href="https://datasociety.net/people/smith-brittany/"><strong>Brittany Smith</strong></a> is the Policy Director at Data & Society. Prior to joining Data & Society, Brittany worked at DeepMind and Google in policy, ethics, and human rights roles. Brittany also currently serves on the Advisory Board of JUST AI, a humanities-led network inviting new ways of thinking about data and AI ethics. Brittany graduated from Northwestern University and the London School of Economics.</p>
]]></description>
      <pubDate>Wed, 23 Jun 2021 00:00:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Algorithmic impact assessments have emerged as a centerpiece of the conversation about algorithmic governance. Impact assessments integrate many of the chief tools of algorithmic governance (e.g., auditing, end-to-end governance frameworks, ethics reviews) and speak to the challenges of algorithmic justice, equity, and community redress. Impact assessments (or similar accountability mechanisms) are at the core of recent headlines about procurement practices in Canada, leaked regulatory proposals in the EU, and new efforts to regulate the tech industry in the US. But do impact assessments promise too much? Multiple national and state governments have instituted, or are considering, requirements for impact assessment of algorithmic systems, but there is a surprisingly wide range of structures for these regulations.</p><p><a href="https://edri.org/our-work/welcoming-our-new-senior-policy-advisor-sarah-chander/"><strong>Sarah Chander</strong></a> leads EDRi's policy work on AI and non-discrimination with respect to digital rights. She is interested in building thoughtful, resilient movements, and she looks to make links between the digital and other social justice movements. Sarah has experience in racial and social justice; previously, she worked in advocacy at the European Network Against Racism (ENAR) on a wide range of topics including anti-discrimination law and policy, intersectional justice, state racism, racial profiling, and police brutality.</p><p><a href="https://www.fenwickmckelvey.com/"><strong>Fenwick McKelvey</strong></a> is an Associate Professor in Information and Communication Technology Policy in the Department of Communication Studies at Concordia University. He studies digital politics and policy. He is the author of Internet Daemons: Digital Communications Possessed (University of Minnesota Press, 2018), winner of the 2019 Gertrude J. Robinson Book Award. He is co-author of The Permanent Campaign: New Media, New Politics (Peter Lang, 2012) with Greg Elmer and Ganaele Langlois.</p><p><a href="https://datasociety.net/people/metcalf-jacob/"><strong>Jacob (Jake) Metcalf,</strong></a> PhD, is a researcher at Data & Society, where he is a member of the AI on the Ground Initiative and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies.</p><p><a href="https://datasociety.net/people/smith-brittany/"><strong>Brittany Smith</strong></a> is the Policy Director at Data & Society. Prior to joining Data & Society, Brittany worked at DeepMind and Google in policy, ethics, and human rights roles. Brittany also currently serves on the Advisory Board of JUST AI, a humanities-led network inviting new ways of thinking about data and AI ethics. Brittany graduated from Northwestern University and the London School of Economics.</p>
]]></content:encoded>
      <enclosure length="58068640" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/acec6f2b-3917-409b-b6c8-0c5b108c7db4/audio/eac7fc28-e7ed-458d-ad82-1cac9a39afe6/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Algorithmic Governance and the State of Impact Assessment in the EU, US, and Canada</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:00:23</itunes:duration>
      <itunes:summary>Data &amp; Society AI on the Ground Program Director Jacob Metcalf hosts a conversation with experts on the emerging regulations in the EU, US, and Canada, and asks how impact assessment practices can effectively account for the harms caused by algorithmic systems.</itunes:summary>
      <itunes:subtitle>Data &amp; Society AI on the Ground Program Director Jacob Metcalf hosts a conversation with experts on the emerging regulations in the EU, US, and Canada, and asks how impact assessment practices can effectively account for the harms caused by algorithmic systems.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>88</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">bf2d5f7c-34be-4c1b-90d0-80fe86ea1896</guid>
      <title>Becoming Data Episode 5: Data &amp; Racial Capitalism</title>
      <description><![CDATA[<p>In the final episode of our season, “Becoming Data,” scholars Sareeta Amrute and Emiliano Treré join our host, Natalie Kerby, to discuss the concept and lived reality of racial capitalism. The episode explores how data-centric systems perpetuate racial capitalism, and how different communities, particularly in the Global South, have resisted this datafication.</p><p>Sareeta Amrute (@SareetaAmrute) is an anthropologist, associate professor at the University of Washington, and Director of Research at Data & Society.</p><p>Emiliano Treré (@EmilianoTrere) is a senior lecturer in Media Ecologies and Social Transformation and co-director of the Data Justice Lab at Cardiff University.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></description>
      <pubDate>Mon, 14 Jun 2021 05:00:00 +0000</pubDate>
      <author>events@datasociety.net (Sareeta Amrute, Natalie Kerby, Emiliano Treré)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In the final episode of our season, “Becoming Data,” scholars Sareeta Amrute and Emiliano Treré join our host, Natalie Kerby, to discuss the concept and lived reality of racial capitalism. The episode explores how data-centric systems perpetuate racial capitalism, and how different communities, particularly in the Global South, have resisted this datafication.</p><p>Sareeta Amrute (@SareetaAmrute) is an anthropologist, associate professor at the University of Washington, and Director of Research at Data & Society.</p><p>Emiliano Treré (@EmilianoTrere) is a senior lecturer in Media Ecologies and Social Transformation and co-director of the Data Justice Lab at Cardiff University.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></content:encoded>
      <enclosure length="45183233" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/89171610-1908-4897-b661-8d6f0258c37a/audio/74352c9f-76d4-4c48-a369-4d991745259a/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Becoming Data Episode 5: Data &amp; Racial Capitalism</itunes:title>
      <itunes:author>Sareeta Amrute, Natalie Kerby, Emiliano Treré</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/69fb10cf-9c18-4365-a009-2cc6dd11dd4c/3000x3000/public-books-logo-1200x855.jpg?aid=rss_feed"/>
      <itunes:duration>00:47:04</itunes:duration>
      <itunes:summary>Sareeta Amrute and Emiliano Treré discuss the relationship between data and racial capitalism.</itunes:summary>
      <itunes:subtitle>Sareeta Amrute and Emiliano Treré discuss the relationship between data and racial capitalism.</itunes:subtitle>
      <itunes:keywords>data, racial capitalism, global south, data colonialism, activism</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>85</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1bfdbe4a-ae81-49cc-88d4-f9e809277fdf</guid>
      <title>Becoming Data Episode 4: Data &amp; Infrastructure</title>
      <description><![CDATA[<p>Scholars Laura Forlano and Ranjit Singh join our host, Natalie Kerby, to explore the different infrastructures that data interacts with and flows through. Whose values get embedded into the algorithms that increasingly govern our lives? How are these data infrastructures complicating what it means to be human?</p><p>Ranjit Singh (@datasociety) is a Postdoctoral Scholar at Data & Society.</p><p>Laura Forlano (@laura4lano) is associate professor at the Institute of Design at Illinois Institute of Technology.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></description>
      <pubDate>Mon, 07 Jun 2021 05:00:00 +0000</pubDate>
      <author>events@datasociety.net (Ranjit Singh, Laura Forlano, Natalie Kerby)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Scholars Laura Forlano and Ranjit Singh join our host, Natalie Kerby, to explore the different infrastructures that data interacts with and flows through. Whose values get embedded into the algorithms that increasingly govern our lives? How are these data infrastructures complicating what it means to be human?</p><p>Ranjit Singh (@datasociety) is a Postdoctoral Scholar at Data & Society.</p><p>Laura Forlano (@laura4lano) is associate professor at the Institute of Design at Illinois Institute of Technology.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></content:encoded>
      <enclosure length="52145748" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/3692dade-d123-4ca0-9679-da7eee7a3307/audio/003c2db3-8cac-489e-b33d-db37e417f76e/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Becoming Data Episode 4: Data &amp; Infrastructure</itunes:title>
      <itunes:author>Ranjit Singh, Laura Forlano, Natalie Kerby</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/62886605-b0ed-448b-8935-eef6f9432dd6/3000x3000/public-books-logo-1200x855.jpg?aid=rss_feed"/>
      <itunes:duration>00:54:19</itunes:duration>
      <itunes:summary>Laura Forlano and Ranjit Singh discuss infrastructures that data interacts with and flows through.</itunes:summary>
      <itunes:subtitle>Laura Forlano and Ranjit Singh discuss infrastructures that data interacts with and flows through.</itunes:subtitle>
      <itunes:keywords>automation, data, digital ids, infrastructure</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>84</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">63146cce-b919-4584-b949-9d1736476521</guid>
      <title>For Leaders and Researchers: Bringing Equity into the Remote Workplace</title>
      <description><![CDATA[<p>Over the past year, Covid-19 has exacerbated suffering across every sector of life. Harassment and hostility, work pressure, and the strain of a pandemic, along with trauma from ongoing racism, sexism, and other discrimination, have taken a toll on our mental health. Until now, little research has been done on the impact of Covid-19 on the tech workforce. Project Include surveyed almost 3,000 people and interviewed dozens more about the shift to remote online workplaces. In their latest report, “Remote work since Covid-19 is exacerbating harm: What companies need to know and do,” they take an intersectional, data equity-focused approach to understand specifically who has been harmed, how they were harmed, and how to fix it.</p><p><strong>Speaker Bios</strong></p><p>One of Silicon Valley's leading advocates for fairness and ethics, <strong>Ellen Pao</strong> is also a long-time entrepreneur and tech investor. Her landmark gender discrimination case against venture capital firm Kleiner Perkins sparked other women, especially women of color, to fight harassment and discrimination in what’s been called the “Pao Effect.” Currently, Ellen is CEO of the nonprofit Project Include, which uses data-based, practical solutions and recommendations to give everyone a fair chance to succeed. At reddit, she was the first major social platform CEO to ban revenge porn, unauthorized nude photos, and cross-platform harassment, with other social media sites quickly following suit. Ellen has written and spoken extensively, including in The New York Times, The Washington Post, TIME, Recode, and WIRED. Her book “Reset” recounts her long-time advocacy of diversity and inclusion and her own experiences with discrimination.</p><p><strong>Yang Hong</strong> works as Shoshin Insights, an in(ter)dependent data and machine learning consultancy with a justice-centered approach to societal issues. She’s worked on the global refugee crisis, hearing aids, data poverty, climate justice, tech equity, and more. As a community builder, she has tended to gardens such as Work On Climate, Activist Teahouse, and South Park Commons. Yang is a creative apprentice to lifelong practices of tea and collective liberation.</p><p><strong>McKensie Mack</strong> (pronouns: they/them) is a trilingual anti-oppression consultant, researcher, facilitator, and the founder & CEO of MMG, a global social justice organization that specializes in organizational change management through a lens of data equity, helping people transform culture, practices, and policies at the intersection of race, gender, class, disability, and LGBTQ+ identity. Their clients are currently based all over the world in the U.S., the UK, France, South Africa, Nigeria, Germany, Spain, and Peru. McKensie is the former inaugural Executive Director of Art+Feminism, one of the largest gender equity-focused projects on Wikipedia.</p>
]]></description>
      <pubDate>Wed, 02 Jun 2021 18:04:29 +0000</pubDate>
      <author>events@datasociety.net (Yang Hong, Ellen Pao, McKensie Mack)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Over the past year, Covid-19 has exacerbated suffering across every sector of life. Harassment and hostility, work pressure, and the strain of a pandemic, along with trauma from ongoing racism, sexism, and other discrimination, have taken a toll on our mental health. Until now, little research has been done on the impact of Covid-19 on the tech workforce. Project Include surveyed almost 3,000 people and interviewed dozens more about the shift to remote online workplaces. In their latest report, “Remote work since Covid-19 is exacerbating harm: What companies need to know and do,” they take an intersectional, data equity-focused approach to understand specifically who has been harmed, how they were harmed, and how to fix it.</p><p><strong>Speaker Bios</strong></p><p>One of Silicon Valley's leading advocates for fairness and ethics, <strong>Ellen Pao</strong> is also a long-time entrepreneur and tech investor. Her landmark gender discrimination case against venture capital firm Kleiner Perkins sparked other women, especially women of color, to fight harassment and discrimination in what’s been called the “Pao Effect.” Currently, Ellen is CEO of the nonprofit Project Include, which uses data-based, practical solutions and recommendations to give everyone a fair chance to succeed. At reddit, she was the first major social platform CEO to ban revenge porn, unauthorized nude photos, and cross-platform harassment, with other social media sites quickly following suit. Ellen has written and spoken extensively, including in The New York Times, The Washington Post, TIME, Recode, and WIRED. Her book “Reset” recounts her long-time advocacy of diversity and inclusion and her own experiences with discrimination.</p><p><strong>Yang Hong</strong> works as Shoshin Insights, an in(ter)dependent data and machine learning consultancy with a justice-centered approach to societal issues. She’s worked on the global refugee crisis, hearing aids, data poverty, climate justice, tech equity, and more. As a community builder, she has tended to gardens such as Work On Climate, Activist Teahouse, and South Park Commons. Yang is a creative apprentice to lifelong practices of tea and collective liberation.</p><p><strong>McKensie Mack</strong> (pronouns: they/them) is a trilingual anti-oppression consultant, researcher, facilitator, and the founder & CEO of MMG, a global social justice organization that specializes in organizational change management through a lens of data equity, helping people transform culture, practices, and policies at the intersection of race, gender, class, disability, and LGBTQ+ identity. Their clients are currently based all over the world in the U.S., the UK, France, South Africa, Nigeria, Germany, Spain, and Peru. McKensie is the former inaugural Executive Director of Art+Feminism, one of the largest gender equity-focused projects on Wikipedia.</p>
]]></content:encoded>
      <enclosure length="56370698" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/9a38c4e3-5f11-4ce1-8a50-be0f3c9a4b69/audio/3f24218f-a085-49ec-8329-e1e48c9d9446/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>For Leaders and Researchers: Bringing Equity into the Remote Workplace</itunes:title>
      <itunes:author>Yang Hong, Ellen Pao, McKensie Mack</itunes:author>
      <itunes:duration>00:58:37</itunes:duration>
      <itunes:summary>Ellen Pao hosts a conversation with Yang Hong and McKensie Mack about their new report on remote workplace harms.</itunes:summary>
      <itunes:subtitle>Ellen Pao hosts a conversation with Yang Hong and McKensie Mack about their new report on remote workplace harms.</itunes:subtitle>
      <itunes:keywords>remote work, tech, diversity, inclusion</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>87</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">065deb61-f1fc-45d5-a1ac-7219a07c091f</guid>
      <title>Becoming Data Episode 3: Data, AI &amp; Automation</title>
      <description><![CDATA[<p>Researchers Arthur Gwagwa and Deb Raji join our host, Natalie Kerby, to discuss data, AI, and automation, and the different ways they operate across geopolitical contexts such as the US and Africa. The episode covers not only the harms that can result from these systems, but also how we might address and prevent those harms.</p><p>Arthur Gwagwa (@arthurgwagwa) is a researcher at Utrecht University’s Ethics Institute in the Department of Philosophy.</p><p>Deb Raji (@rajiinio) is a fellow at Mozilla and works closely with the Algorithmic Justice League.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></description>
      <pubDate>Tue, 01 Jun 2021 13:44:25 +0000</pubDate>
      <author>events@datasociety.net (Arthur Gwagwa, Deb Raji, Natalie Kerby)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Researchers Arthur Gwagwa and Deb Raji join our host, Natalie Kerby, to discuss data, AI, and automation, and the different ways they operate across geopolitical contexts such as the US and Africa. The episode covers not only the harms that can result from these systems, but also how we might address and prevent those harms.</p><p>Arthur Gwagwa (@arthurgwagwa) is a researcher at Utrecht University’s Ethics Institute in the Department of Philosophy.</p><p>Deb Raji (@rajiinio) is a fellow at Mozilla and works closely with the Algorithmic Justice League.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></content:encoded>
      <enclosure length="40109603" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/d7edbfcf-5a4e-4779-b288-9577ef03da7c/audio/c368e3b8-2f57-4c39-bdb9-926acea31939/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Becoming Data Episode 3: Data, AI &amp; Automation</itunes:title>
      <itunes:author>Arthur Gwagwa, Deb Raji, Natalie Kerby</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/b6262244-e7ce-4388-9b46-ad2669154be0/3000x3000/public-books-logo-1200x855.jpg?aid=rss_feed"/>
      <itunes:duration>00:41:47</itunes:duration>
      <itunes:summary>Deb Raji and Arthur Gwagwa discuss AI and automation across different geopolitical contexts.</itunes:summary>
      <itunes:subtitle>Deb Raji and Arthur Gwagwa discuss AI and automation across different geopolitical contexts.</itunes:subtitle>
      <itunes:keywords>automation, data, ai, algorithmic harm</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>83</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">d3d6786f-12b5-4061-a6e5-07d8f77be015</guid>
      <title>Unseen Teens: The Challenges of Building Healthy Tech For Young People</title>
      <description><![CDATA[<p>At social media and gaming companies, the user is the constant focus — at least in theory. How to get them to use this platform more? To stay longer? To come back tomorrow? Attention and resources are poured into answering these questions throughout the industry. That same attention and those same resources are not, however, put toward the well-being of a major group of their users: young people.</p><p>Data & Society’s new report, <i>The Unseen Teen: The Challenges of Building Healthy Tech for Young People</i>, investigates how social platform companies think about and design for young people and their health and digital well-being. Based on a multi-year, qualitative research project interviewing social media and social gaming company workers in a variety of roles, we learned that many companies struggle to imagine and design for the breadth of their users, especially minors. Instead, by focusing on averages and limited quantitative metrics, tech companies miss the nuance in differential impacts, which has real consequences for real people. We discuss the report, our recommendations, and the real-world implications of our findings on those entrusted to help companies think about the well-being of their youngest users.</p><p><strong>Speaker Bios</strong></p><p><strong>Amanda Lenhart</strong> studies how technology affects human lives, with a special focus on families and children. A quantitative and qualitative researcher, Amanda is the Health + Data Research Lead at the Data & Society Research Institute. Over decades, she has examined how adolescents and their families use and think about technology, how young adults consume news, how harassment has thrived in online spaces, and how automation will impact workers. Most recently, as deputy director of the Better Life Lab at New America, Amanda focused on the ways technology affects workers’ jobs and lives, as well as the family-supportive policies that enable balance between the personal and the professional. She began her career at the Pew Research Center, studying how teens and families use social and mobile technologies.</p><p><strong>Charlotte Willner</strong> is the Founding Executive Director of the Trust & Safety Professional Association (TSPA) and the Trust & Safety Foundation, after fourteen years of working in trust and safety operations. She began her career at Facebook, where she led international user support, then built out their first safety operations team. She went on to build and lead Pinterest's trust and safety operations team, overseeing online safety, law enforcement response, and intellectual property matters. She holds a degree in English from Bowdoin College and is delighted to show you that this is, in fact, what you can do with an English degree.</p><p><strong>Aden Van Noppen</strong> is the Founder and Executive Director of Mobius, an unconventional collective of technologists, scientists, activists, and spiritual teachers working together to create a world in which technology brings out the best in humanity. She was a Senior Advisor to the U.S. Chief Technology Officer in the Obama White House, where she developed and led programs that leverage tech as a tool for social and economic justice. After that, she spent a year as a Resident Fellow at Harvard Divinity School focusing on the intersection of tech, ethics, and spirituality, and was an affiliate at Harvard’s Berkman Klein Center for Internet and Society. Aden was also part of the founding leadership team of The Sanctuaries, the first interfaith arts community in the country. Her work has been featured in The New Yorker, The New York Times, WIRED, and elsewhere.</p>
]]></description>
      <pubDate>Tue, 1 Jun 2021 13:43:38 +0000</pubDate>
      <author>events@datasociety.net (Charlotte Willner, Aden Van Noppen, Amanda Lenhart)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>At social media and gaming companies, the user is the constant focus — at least in theory. How to get them to use this platform more? To stay longer? To come back tomorrow? Attention and resources are poured into answering these questions throughout the industry. That same attention and those same resources are not, however, put toward the well-being of a major group of their users: young people. </p><p>Data & Society’s new report, <i>The Unseen Teen: The Challenges of Building Healthy Tech for Young People</i>, investigates how social platform companies think about and design for young people and their health and digital well-being. Based on a multi-year, qualitative research project interviewing social media and social gaming company workers in a variety of roles, we learned that many companies struggle to imagine and design for the breadth of their users, especially minors. Instead, by focusing on averages and limited quantitative metrics, tech companies miss the nuance of differential impacts, which has real consequences for real people. We discuss the report, our recommendations, and the real-world implications of our findings for those entrusted to help companies think about the well-being of their youngest users.</p><p><strong>Speaker Bios</strong></p><p><strong>Amanda Lenhart</strong> studies how technology affects human lives, with a special focus on families and children. A quantitative and qualitative researcher, Amanda is the Health + Data Research Lead at the Data & Society Research Institute. Over decades, she has examined how adolescents and their families use and think about technology, how young adults consume news, how harassment has thrived in online spaces, and how automation will impact workers. 
Most recently, as deputy director of the Better Life Lab at New America, Amanda focused on the ways technology affects workers’ jobs and lives, as well as the family-supportive policies that enable balance between the personal and the professional. She began her career at the Pew Research Center, studying how teens and families use social and mobile technologies.</p><p><strong>Charlotte Willner</strong> is the Founding Executive Director of the Trust & Safety Professional Association (TSPA) and the Trust & Safety Foundation, following fourteen years of work in trust and safety operations. She began her career at Facebook, where she led international user support, then built out their first safety operations team. She went on to build and lead Pinterest's trust and safety operations team, overseeing online safety, law enforcement response, and intellectual property matters. She holds a degree in English from Bowdoin College and is delighted to show you that this is, in fact, what you can do with an English degree.</p><p><strong>Aden Van Noppen</strong> is the Founder and Executive Director of Mobius, an unconventional collective of technologists, scientists, activists, and spiritual teachers working together to create a world in which technology brings out the best in humanity. She was a Senior Advisor to the U.S. Chief Technology Officer in the Obama White House, where she developed and led programs that leveraged tech as a tool for social and economic justice. After that, she spent a year as a Resident Fellow at Harvard Divinity School focusing on the intersection of tech, ethics, and spirituality, and was an affiliate at Harvard’s Berkman Klein Center for Internet and Society. Aden was also part of the founding leadership team of The Sanctuaries, the first interfaith arts community in the country. Her work has been featured in The New Yorker, The New York Times, WIRED, and elsewhere.</p>
]]></content:encoded>
      <enclosure length="69980578" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/4367fa0d-4bd7-4586-adfb-f6a705e14725/audio/b1f8572c-0e2f-4367-a76e-47de3cacfd81/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Unseen Teens: The Challenges of Building Healthy Tech For Young People</itunes:title>
      <itunes:author>Charlotte Willner, Aden Van Noppen, Amanda Lenhart</itunes:author>
      <itunes:duration>01:12:48</itunes:duration>
      <itunes:summary>Amanda Lenhart hosts Aden Van Noppen and Charlotte Willner for a conversation on designing healthy technologies for young people.</itunes:summary>
      <itunes:subtitle>Amanda Lenhart hosts Aden Van Noppen and Charlotte Willner for a conversation on designing healthy technologies for young people.</itunes:subtitle>
      <itunes:keywords>digital wellbeing, teens, tech companies, healthy tech</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>86</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">925e6959-1b63-4bc9-b612-eb5bfa905774</guid>
      <title>Becoming Data Episode 2: Data &amp; Labor</title>
      <description><![CDATA[<p>Scholar Shaka McGlotten and activist Chris Ramsaroop join our host, Natalie Kerby, to discuss data in the context of labor. The episode addresses the historical ways that data has been used to organize labor, the labor of making ourselves visible to data-centric systems, and the different ways that people, and more specifically workers, are resisting datafication.</p><p>Shaka McGlotten (@shakaz23) is a professor of anthropology and media studies at Purchase College, SUNY and 2020-2021 Faculty Fellow at Data & Society.</p><p>Chris Ramsaroop (@j4mw) is an organizer with Justice for Migrant Workers and a PhD candidate at the University of Toronto.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></description>
      <pubDate>Mon, 24 May 2021 13:50:02 +0000</pubDate>
      <author>events@datasociety.net (Chris Ramsaroop, Shaka McGlotten, Natalie Kerby)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Scholar Shaka McGlotten and activist Chris Ramsaroop join our host, Natalie Kerby, to discuss data in the context of labor. The episode addresses the historical ways that data has been used to organize labor, the labor of making ourselves visible to data-centric systems, and the different ways that people, and more specifically workers, are resisting datafication.</p><p>Shaka McGlotten (@shakaz23) is a professor of anthropology and media studies at Purchase College, SUNY and 2020-2021 Faculty Fellow at Data & Society.</p><p>Chris Ramsaroop (@j4mw) is an organizer with Justice for Migrant Workers and a PhD candidate at the University of Toronto.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></content:encoded>
      <enclosure length="58449683" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/99b92fdd-e1b5-4d30-95e2-d242e7f8d7f5/audio/9e615765-be3a-4159-8b58-502e26ce846c/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Becoming Data Episode 2: Data &amp; Labor</itunes:title>
      <itunes:author>Chris Ramsaroop, Shaka McGlotten, Natalie Kerby</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/7440d3e4-692a-4702-8e7c-3e7e49d9e613/3000x3000/public-books-logo-1200x855.jpg?aid=rss_feed"/>
      <itunes:duration>01:00:53</itunes:duration>
      <itunes:summary>Shaka McGlotten and Chris Ramsaroop discuss the history of human labor being quantified as data. </itunes:summary>
      <itunes:subtitle>Shaka McGlotten and Chris Ramsaroop discuss the history of human labor being quantified as data. </itunes:subtitle>
      <itunes:keywords>automation, data, labor, farm workers</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>82</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">feb2fdac-3901-42c4-8038-a699186715b0</guid>
      <title>Becoming Data Episode 1: Data &amp; Humanity</title>
      <description><![CDATA[<p>In the first episode of our new season, “Becoming Data,” artist Mimi Onuoha and data journalist Lam Thuy Vo join host Natalie Kerby to consider what is lost when human life is translated into data. How do people show up in data, and what are some of the inequalities that can result from data collection?</p><p>Mimi Onuoha (@thistimeitsmimi) is a media artist who makes work about what it means for the world to take the form of data.</p><p>Lam Thuy Vo (@lamthuyvo) is a reporter who digs into data to examine how systems and policies affect individuals. She is an incoming Data Journalist-in-Residence at the Craig Newmark School of Journalism.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></description>
      <pubDate>Mon, 17 May 2021 13:46:39 +0000</pubDate>
      <author>events@datasociety.net (Lam Thuy Vo, Mimi Onuoha, Natalie Kerby)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In the first episode of our new season, “Becoming Data,” artist Mimi Onuoha and data journalist Lam Thuy Vo join host Natalie Kerby to consider what is lost when human life is translated into data. How do people show up in data, and what are some of the inequalities that can result from data collection?</p><p>Mimi Onuoha (@thistimeitsmimi) is a media artist who makes work about what it means for the world to take the form of data.</p><p>Lam Thuy Vo (@lamthuyvo) is a reporter who digs into data to examine how systems and policies affect individuals. She is an incoming Data Journalist-in-Residence at the Craig Newmark School of Journalism.</p><p>"Becoming Data" is co-produced by Data & Society and Public Books.</p>
]]></content:encoded>
      <enclosure length="55068383" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/f58e4db2-b95b-4db1-ad00-8a0e305602e9/audio/0f947d6d-15a5-45c4-ba98-97122420f375/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Becoming Data Episode 1: Data &amp; Humanity</itunes:title>
      <itunes:author>Lam Thuy Vo, Mimi Onuoha, Natalie Kerby</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/b837e9df-01f5-48e4-be8d-40db14af8d04/3000x3000/public-books-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:57:22</itunes:duration>
      <itunes:summary>Artist Mimi Onuoha and data journalist Lam Thuy Vo discuss data collection practices and their consequences with host Natalie Kerby.</itunes:summary>
      <itunes:subtitle>Artist Mimi Onuoha and data journalist Lam Thuy Vo discuss data collection practices and their consequences with host Natalie Kerby.</itunes:subtitle>
      <itunes:keywords>data, algorithmic violence, power, participatory data</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>81</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f91df0cd-e953-440e-a5ad-49cd9c0809a0</guid>
      <title>Becoming Data: Trailer</title>
      <description><![CDATA[<p>In this podcast series, Natalie Kerby of Data & Society asks her guests: How long has human life been quantified as data, and in what contexts? What are some major implications of humanity being measured as data? How are people pushing back against the datafication of human life, work, health, and citizenship?</p><p>She speaks with academics, artists, activists, and journalists to explore these questions and more.</p>
]]></description>
      <pubDate>Mon, 10 May 2021 13:49:42 +0000</pubDate>
      <author>events@datasociety.net (Mimi Onuoha, Sareeta Amrute, Lam Thuy Vo, Shaka McGlotten, Arthur Gwagwa, Chris Ramsaroop, Ranjit Singh, Emiliano Treré, Deb Raji, Laura Forlano)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In this podcast series, Natalie Kerby of Data & Society asks her guests: How long has human life been quantified as data, and in what contexts? What are some major implications of humanity being measured as data? How are people pushing back against the datafication of human life, work, health, and citizenship?</p><p>She speaks with academics, artists, activists, and journalists to explore these questions and more.</p>
]]></content:encoded>
      <enclosure length="1736664" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/89928fc8-b272-417b-b7f6-f2ed980383c8/audio/6e0d3e54-9a42-45c9-b850-199d3c3b089f/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Becoming Data: Trailer</itunes:title>
      <itunes:author>Mimi Onuoha, Sareeta Amrute, Lam Thuy Vo, Shaka McGlotten, Arthur Gwagwa, Chris Ramsaroop, Ranjit Singh, Emiliano Treré, Deb Raji, Laura Forlano</itunes:author>
      <itunes:image href="https://image.simplecastcdn.com/images/9856635c-0e65-421d-898d-9d7fd229b4fd/c28de1a7-57c2-4ae4-82d4-f2c38c383cfd/3000x3000/public-books-logo.jpg?aid=rss_feed"/>
      <itunes:duration>00:01:49</itunes:duration>
      <itunes:summary>“Becoming Data” is the third season of Public Books 101, co-produced by Public Books and Data &amp; Society.</itunes:summary>
      <itunes:subtitle>“Becoming Data” is the third season of Public Books 101, co-produced by Public Books and Data &amp; Society.</itunes:subtitle>
      <itunes:keywords>data, society, datafication</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>80</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9a7bb1bc-ac06-43ea-9054-f7b5198c51e6</guid>
      <title>Vaccine Passports with Ada Lovelace Institute</title>
      <description><![CDATA[<p>To facilitate a global understanding of possible vulnerabilities that will arise from vaccine passport adoption, we bring together <strong>Ranjit Singh</strong>, expert on digital identity systems, <strong>Amy Fairchild</strong>, public health ethicist, and <strong>Imogen Parker</strong>, head of policy at Ada Lovelace Institute, to discuss the past and future of digital health systems. The conversation is hosted by Data & Society Health and Data Program Director <strong>Amanda Lenhart</strong>.</p><p><i>Speaker Bios</i></p><p><strong>Amanda Lenhart (Host)</strong><br />Amanda Lenhart studies how technology affects human lives, with a special focus on families and children. A quantitative and qualitative researcher, Amanda is the Health + Data Research Lead at the Data & Society Research Institute. Over decades, she has examined how adolescents and their families use and think about technology, how young adults consume news, how harassment has thrived in online spaces, and how automation will impact workers. Most recently, as deputy director of the Better Life Lab at New America, Amanda focused on the ways technology affects workers’ jobs and lives, as well as the family-supportive policies that enable balance between the personal and the professional. She began her career at the Pew Research Center, studying how teens and families use social and mobile technologies.</p><p>Amanda specializes in translating rigorous research for a broad national audience. Dedicated to public communication, she has testified before congressional subcommittees and the Federal Trade Commission. 
Amanda’s work has been featured in numerous national publications and broadcasts, including the PBS Newshour and NPR’s All Things Considered.</p><p><strong>Imogen Parker (Co-Host)</strong><br />Imogen is Head of Policy at the Ada Lovelace Institute, where she is responsible for creating social change through developments to policy, law, regulation and public service delivery. She is a Policy Fellow at Cambridge University’s Centre for Science and Policy.</p><p>Her career has been at the intersection of social justice, technology and research. In her previous role as Head of the Nuffield Foundation’s programmes on Justice, Rights and Digital Society she worked in collaboration with the founding partner organisations to create the Institute. Prior to that she was acting Head of Policy Research for Citizens and Democracy at Citizens Advice, Research Fellow at the Institute for Public Policy Research (IPPR) and worked with Baroness Kidron to create the children’s digital rights charity 5Rights.</p><p><strong>Ranjit Singh (Panelist)</strong><br />Ranjit Singh has a doctorate in Science and Technology Studies (STS) from Cornell University. His research lies at the intersection of data infrastructures, global development, and public policy. He uses methods of interview-based qualitative sociology and multi-sited ethnography in his research. He examines the everyday experiences of people subject to data-driven practices and follows the mutual shaping of their lives and their data records. 
His dissertation research on Aadhaar, the national biometrics-based identification infrastructure of India, advances the public understanding of the affordances and limits of biometrics-based data infrastructures in practically achieving inclusive development and reshaping the nature of Indian citizenship.</p><p>He has published his research in venues such as the Journal of South Asian Studies and the ACM CHI Conference; he has presented his work at conferences including CSCW, 4S, AAA, and ECSAS. Beyond the dissertation, he has focused on two additional infrastructures: (1) the National Register of Citizens in Assam, India—an effort to differentiate citizens from illegal immigrants. (2) US Credit Scoring—the efforts of low-income individuals to improve their creditworthiness within the lending industry. In all these projects, his research is oriented towards understanding how data is increasingly used to imagine and develop new digital solutions for democratizing inclusion. He was also involved in developing the Digital Due Process Clinic, a clinical program at Cornell University, to study and support individuals in their struggles to secure fair representation in data infrastructures.</p><p><strong>Amy Lauren Fairchild (Panelist)</strong><br />Amy Lauren Fairchild is a historian who works at the intersection of history, public health ethics, and public health policy and politics. Her work helped establish public health ethics—which is concerned with the well-being of populations—as fundamentally distinct from either bioethics or human rights. 
Whether exploring the tension between privacy and surveillance, immigration and border control, or paternalism and liberty, Fairchild assesses the social, political, and ethical factors that shape not only the potential and limits of the state to intervene for the common good but also what counts as evidence.</p><p>Fairchild has written two books: <i>Science at the Borders: Immigrant Medical Inspection and the Shaping of the Modern Industrial Labor Force</i> and <i>Searching Eyes: Privacy, the State, and Disease Surveillance in America</i> (with Ronald Bayer and James Colgrove). In addition, she has published in leading journals including the New England Journal of Medicine, Health Affairs, the American Journal of Public Health, Science, and JAMA. The National Endowment for the Humanities funds her current book project: a social history of fear and panic.</p><p>A graduate of the Plan II Honors Program at the University of Texas at Austin, Fairchild received her MPH and PhD from Columbia University. She was on the faculty at Columbia for 22 years in the Department of Sociomedical Sciences at the Mailman School of Public Health. At Columbia, she served as Assistant Director for Academic Affairs in the Center for the History and Ethics of Public Health, Chair of the Department of Sociomedical Sciences, and Director of the Foundations module and Integration of Science and Practice in the MPH Core Curriculum. She continues to serve as Co-Director, with Ronald Bayer, of the World Health Organization Collaborating Center for Bioethics at Columbia’s Center for the History and Ethics of Public Health. Fairchild also served on the faculty at Texas A&M University. There, she was Associate Dean for Academic Affairs at the School of Public Health and Associate Vice President for Faculty and Academic Affairs at the Health Science Center.</p><p>Fairchild feels extraordinarily honored to serve as Dean of the College of Public Health at The Ohio State University. 
Any university with the chutzpah to have a poison nut for a mascot is the kind of place she wants to stay.</p>
]]></description>
      <pubDate>Wed, 24 Mar 2021 20:18:26 +0000</pubDate>
      <author>events@datasociety.net (Amy Fairchild, Imogen Parker, Ranjit Singh, Amanda Lenhart)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>To facilitate a global understanding of possible vulnerabilities that will arise from vaccine passport adoption, we bring together <strong>Ranjit Singh</strong>, expert on digital identity systems, <strong>Amy Fairchild</strong>, public health ethicist, and <strong>Imogen Parker</strong>, head of policy at Ada Lovelace Institute, to discuss the past and future of digital health systems. The conversation is hosted by Data & Society Health and Data Program Director <strong>Amanda Lenhart</strong>.</p><p><i>Speaker Bios</i></p><p><strong>Amanda Lenhart (Host)</strong><br />Amanda Lenhart studies how technology affects human lives, with a special focus on families and children. A quantitative and qualitative researcher, Amanda is the Health + Data Research Lead at the Data & Society Research Institute. Over decades, she has examined how adolescents and their families use and think about technology, how young adults consume news, how harassment has thrived in online spaces, and how automation will impact workers. Most recently, as deputy director of the Better Life Lab at New America, Amanda focused on the ways technology affects workers’ jobs and lives, as well as the family-supportive policies that enable balance between the personal and the professional. She began her career at the Pew Research Center, studying how teens and families use social and mobile technologies.</p><p>Amanda specializes in translating rigorous research for a broad national audience. Dedicated to public communication, she has testified before congressional subcommittees and the Federal Trade Commission. 
Amanda’s work has been featured in numerous national publications and broadcasts, including the PBS Newshour and NPR’s All Things Considered.</p><p><strong>Imogen Parker (Co-Host)</strong><br />Imogen is Head of Policy at the Ada Lovelace Institute, where she is responsible for creating social change through developments to policy, law, regulation and public service delivery. She is a Policy Fellow at Cambridge University’s Centre for Science and Policy.</p><p>Her career has been at the intersection of social justice, technology and research. In her previous role as Head of the Nuffield Foundation’s programmes on Justice, Rights and Digital Society she worked in collaboration with the founding partner organisations to create the Institute. Prior to that she was acting Head of Policy Research for Citizens and Democracy at Citizens Advice, Research Fellow at the Institute for Public Policy Research (IPPR) and worked with Baroness Kidron to create the children’s digital rights charity 5Rights.</p><p><strong>Ranjit Singh (Panelist)</strong><br />Ranjit Singh has a doctorate in Science and Technology Studies (STS) from Cornell University. His research lies at the intersection of data infrastructures, global development, and public policy. He uses methods of interview-based qualitative sociology and multi-sited ethnography in his research. He examines the everyday experiences of people subject to data-driven practices and follows the mutual shaping of their lives and their data records. 
His dissertation research on Aadhaar, the national biometrics-based identification infrastructure of India, advances the public understanding of the affordances and limits of biometrics-based data infrastructures in practically achieving inclusive development and reshaping the nature of Indian citizenship.</p><p>He has published his research in venues such as the Journal of South Asian Studies and the ACM CHI Conference; he has presented his work at conferences including CSCW, 4S, AAA, and ECSAS. Beyond the dissertation, he has focused on two additional infrastructures: (1) the National Register of Citizens in Assam, India—an effort to differentiate citizens from illegal immigrants. (2) US Credit Scoring—the efforts of low-income individuals to improve their creditworthiness within the lending industry. In all these projects, his research is oriented towards understanding how data is increasingly used to imagine and develop new digital solutions for democratizing inclusion. He was also involved in developing the Digital Due Process Clinic, a clinical program at Cornell University, to study and support individuals in their struggles to secure fair representation in data infrastructures.</p><p><strong>Amy Lauren Fairchild (Panelist)</strong><br />Amy Lauren Fairchild is a historian who works at the intersection of history, public health ethics, and public health policy and politics. Her work helped establish public health ethics—which is concerned with the well-being of populations—as fundamentally distinct from either bioethics or human rights. 
Whether exploring the tension between privacy and surveillance, immigration and border control, or paternalism and liberty, Fairchild assesses the social, political, and ethical factors that shape not only the potential and limits of the state to intervene for the common good but also what counts as evidence.</p><p>Fairchild has written two books: <i>Science at the Borders: Immigrant Medical Inspection and the Shaping of the Modern Industrial Labor Force</i> and <i>Searching Eyes: Privacy, the State, and Disease Surveillance in America</i> (with Ronald Bayer and James Colgrove). In addition, she has published in leading journals including the New England Journal of Medicine, Health Affairs, the American Journal of Public Health, Science, and JAMA. The National Endowment for the Humanities funds her current book project: a social history of fear and panic.</p><p>A graduate of the Plan II Honors Program at the University of Texas at Austin, Fairchild received her MPH and PhD from Columbia University. She was on the faculty at Columbia for 22 years in the Department of Sociomedical Sciences at the Mailman School of Public Health. At Columbia, she served as Assistant Director for Academic Affairs in the Center for the History and Ethics of Public Health, Chair of the Department of Sociomedical Sciences, and Director of the Foundations module and Integration of Science and Practice in the MPH Core Curriculum. She continues to serve as Co-Director, with Ronald Bayer, of the World Health Organization Collaborating Center for Bioethics at Columbia’s Center for the History and Ethics of Public Health. Fairchild also served on the faculty at Texas A&M University. There, she was Associate Dean for Academic Affairs at the School of Public Health and Associate Vice President for Faculty and Academic Affairs at the Health Science Center.</p><p>Fairchild feels extraordinarily honored to serve as Dean of the College of Public Health at The Ohio State University. 
Any university with the chutzpah to have a poison nut for a mascot is the kind of place she wants to stay.</p>
]]></content:encoded>
      <enclosure length="57281768" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/4be7296e-2f5a-432e-b193-3a62f0b9d053/audio/af50b80b-a4d9-4277-98d1-0775b62b2e88/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Vaccine Passports with Ada Lovelace Institute</itunes:title>
      <itunes:author>Amy Fairchild, Imogen Parker, Ranjit Singh, Amanda Lenhart</itunes:author>
      <itunes:duration>00:59:41</itunes:duration>
      <itunes:summary>Data &amp; Society and the Ada Lovelace Institute co-host a discussion about the next frontier of COVID-19 roll-outs: vaccine passports.</itunes:summary>
      <itunes:subtitle>Data &amp; Society and the Ada Lovelace Institute co-host a discussion about the next frontier of COVID-19 roll-outs: vaccine passports.</itunes:subtitle>
      <itunes:keywords>vaccine passport, digital health systems, vaccine, covid-19</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>79</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">dbcc5494-4936-4d21-9f31-46f031361a40</guid>
      <title>Digital Technology and Democratic Theory</title>
      <description><![CDATA[<p><strong>Data & Society</strong> and <a href="https://pacscenter.stanford.edu/#"><strong>Stanford PACS</strong></a> host a special book launch: One of the most far-reaching transformations in our era is the wave of digital technologies rolling over—and upending—nearly every aspect of life. Work and leisure, family and friendship, community and citizenship have all been modified by now-ubiquitous digital tools and platforms. <a href="https://bookshop.org/a/14284/9780226748573"><i>Digital Technology and Democratic Theory</i></a> looks closely at one significant facet of our rapidly evolving digital lives: <strong>how technology is radically changing our lives as citizens and participants in democratic governments</strong>.</p><p>To understand these transformations, this book brings together contributions by scholars from multiple disciplines to wrestle with the question of how digital technologies shape, reshape, and affect fundamental questions about democracy and democratic theory. As expectations have whiplashed—from Twitter optimism in the wake of the Arab Spring to Facebook pessimism in the wake of the 2016 US election—the time is ripe for a more sober and long-term assessment. How should we take stock of digital technologies and their promise and peril for reshaping democratic societies and institutions? To answer, this volume broaches the most pressing technological changes and issues facing democracy as a philosophy and an institution.</p><p><i>Speaker Bios</i></p><p><a href="https://datasociety.net/people/caplan-robyn/"><strong>Robyn Caplan</strong></a><strong> | @robyncaplan</strong><br />Robyn Caplan is a Researcher at Data & Society, and a PhD Candidate at Rutgers University (ABD, advisor Philip M. Napoli) in the School of Communication and Information Studies. She conducts research on issues related to platform governance and content standards. 
Her most recent work investigates the extent to which organizational dynamics at major platform companies impact the development and enforcement of policy geared towards limiting disinformation and hate speech, and the role that regulation, industry coordination, and advocacy can play in changing platform policies.</p><p>Her work has been published in journals such as <i>First Monday</i>, <i>Big Data & Society</i>, and <i>Feminist Media Studies</i>. She has had editorials featured in <i>The New York Times</i>, and her work has been featured by <i>NBC News THINK</i> and <i>Al Jazeera</i>. She has conducted research on a variety of issues regarding data-centric technological development in society, including government data policies, media manipulation, and the use of data in policing.</p><p><a href="https://pacscenter.stanford.edu/person/lucy-bernholz/"><strong>Lucy Bernholz</strong></a><strong> | @p2173</strong><br />Lucy Bernholz is a Senior Research Scholar at Stanford University’s Center on Philanthropy and Civil Society and Director of the Digital Civil Society Lab. She has been a Visiting Scholar at The David and Lucile Packard Foundation, and a Fellow at the Rockefeller Foundation’s Bellagio Center, the Hybrid Reality Institute, and the New America Foundation. She is the author of numerous articles and books, including the annual <i>Blueprint Series on Philanthropy and the Social Economy</i>, the 2010 publication <i>Disrupting Philanthropy</i>, and her 2004 book <i>Creating Philanthropic Capital Markets: The Deliberate Evolution</i>. She is a co-editor of <i>Philanthropy in Democratic Societies</i> (2016, University of Chicago Press) and of the forthcoming volume <i>Digital Technology and Democratic Theory</i>. She writes extensively on philanthropy, technology, and policy on her award-winning blog, philanthropy2173.com.</p><p>She studied history and has a B.A. from Yale University, where she played field hockey and captained the lacrosse team, and an M.A. 
and Ph.D. from Stanford University.</p><p><a href="https://pacscenter.stanford.edu/person/rob-reich/"><strong>Rob Reich</strong></a><strong> | @robreich</strong><br />Rob Reich is professor of political science and, by courtesy, professor of philosophy and at the Graduate School of Education, at Stanford University. He is the director of the Center for Ethics in Society and co-director of the Center on Philanthropy and Civil Society (publisher of the <i>Stanford Social Innovation Review</i>), both at Stanford University. He is the author most recently of <i>Just Giving: Why Philanthropy is Failing Democracy and How It Can Do Better</i> (Princeton University Press, 2018) and <i>Philanthropy in Democratic Societies: History, Institutions, Values</i> (edited with Chiara Cordelli and Lucy Bernholz, University of Chicago Press, 2016). He is also the author of several books on education: <i>Bridging Liberalism and Multiculturalism in American Education</i> (University of Chicago Press, 2002) and <i>Education, Justice, and Democracy</i> (edited with Danielle Allen, University of Chicago Press, 2013). His current work focuses on ethics, public policy, and technology, and he serves as associate director of the Human-Centered Artificial Intelligence initiative at Stanford. Rob is the recipient of multiple teaching awards, including the Walter J. Gores Award, Stanford’s highest honor for teaching. Reich was a sixth-grade teacher at Rusk Elementary School in Houston, Texas, before attending graduate school. He is a board member of the magazine Boston Review, of Giving Tuesday, and of the Spencer Foundation. More details at his personal webpage: http://robreich.stanford.edu</p><p><a href="https://www.lse.ac.uk/media-and-communications/people/academic-staff/seeta-pena-gangadharan"><strong>Seeta Peña Gangadharan</strong></a><br />Dr Seeta Peña Gangadharan is Associate Professor in the Department of Media and Communications at the London School of Economics and Political Science. 
Her work focuses on inclusion, exclusion, and marginalization, as well as questions around democracy, social justice, and technological governance. She currently co-leads two projects: Our Data Bodies, which examines the impact of data collection and data-driven technologies on members of marginalized communities in the United States, and Justice, Equity, and Technology, which explores the impacts of data-driven technologies and infrastructures on European civil society. She is also a visiting scholar in the School of Media Studies at The New School, Affiliated Fellow of Yale Law School’s Information Society Project, and Affiliate Fellow of Data & Society Research Institute.</p><p>Before joining the Department in 2015, Seeta was Senior Research Fellow at New America’s Open Technology Institute, addressing policies and practices related to digital inclusion, privacy, and “big data.” Before OTI, she was a Postdoctoral Associate in Law and MacArthur Fellow at Yale Law School’s Information Society Project. She received her PhD from Stanford University and holds an MSc from the Department of Media and Communications at the London School of Economics and Political Science.</p><p>Seeta’s research has been supported by grants from Digital Trust Foundation, Institute of Museum and Library Services, Ford Foundation, Open Society Foundations, Stanford University’s Center on Philanthropy and Civil Society, and U.S. Department of Commerce’s Broadband Technology Opportunities Program.</p><p><a href="https://www.hks.harvard.edu/faculty/archon-fung"><strong>Archon Fung</strong></a><strong> | @Arfung</strong><br />Archon Fung is the Winthrop Laflin McCormack Professor of Citizenship and Self-Government at the Harvard Kennedy School. His research explores policies, practices, and institutional designs that deepen the quality of democratic governance. He focuses upon public participation, deliberation, and transparency. 
He co-directs the Transparency Policy Project and leads democratic governance programs of the Ash Center for Democratic Governance and Innovation at the Kennedy School. His books include <i>Full Disclosure: The Perils and Promise of Transparency</i> (Cambridge University Press, with Mary Graham and David Weil) and <i>Empowered Participation: Reinventing Urban Democracy</i> (Princeton University Press). He has authored five books, four edited collections, and over fifty articles appearing in professional journals. He received two S.B.s — in philosophy and physics — and his Ph.D. in political science from MIT.</p>
]]></description>
      <pubDate>Wed, 3 Feb 2021 20:02:49 +0000</pubDate>
      <author>events@datasociety.net (Lucy Bernholz, Archon Fung, Robyn Caplan, Rob Reich, Seeta Peña Gangadharan)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><strong>Data & Society</strong> and <a href="https://pacscenter.stanford.edu/#"><strong>Stanford PACS</strong></a> host a special book launch: One of the most far-reaching transformations in our era is the wave of digital technologies rolling over—and upending—nearly every aspect of life. Work and leisure, family and friendship, community and citizenship have all been modified by now-ubiquitous digital tools and platforms. <a href="https://bookshop.org/a/14284/9780226748573"><i>Digital Technology and Democratic Theory</i></a> looks closely at one significant facet of our rapidly evolving digital lives: <strong>how technology is radically changing our lives as citizens and participants in democratic governments</strong>.</p><p>To understand these transformations, this book brings together contributions by scholars from multiple disciplines to wrestle with the question of how digital technologies shape, reshape, and affect fundamental questions about democracy and democratic theory. As expectations have whiplashed—from Twitter optimism in the wake of the Arab Spring to Facebook pessimism in the wake of the 2016 US election—the time is ripe for a more sober and long-term assessment. How should we take stock of digital technologies and their promise and peril for reshaping democratic societies and institutions? To answer, this volume broaches the most pressing technological changes and issues facing democracy as a philosophy and an institution.</p><p><i>Speaker Bios</i></p><p><a href="https://datasociety.net/people/caplan-robyn/"><strong>Robyn Caplan</strong></a><strong> | @robyncaplan</strong><br />Robyn Caplan is a Researcher at Data & Society, and a PhD Candidate at Rutgers University (ABD, advisor Philip M. Napoli) in the School of Communication and Information Studies. She conducts research on issues related to platform governance and content standards. 
Her most recent work investigates the extent to which organizational dynamics at major platform companies impact the development and enforcement of policy geared towards limiting disinformation and hate speech, and the role that regulation, industry coordination, and advocacy can play in changing platform policies.</p><p>Her work has been published in journals such as <i>First Monday</i>, <i>Big Data & Society</i>, and <i>Feminist Media Studies</i>. She has had editorials featured in <i>The New York Times</i>, and her work has been featured by <i>NBC News THINK</i> and <i>Al Jazeera</i>. She has conducted research on a variety of issues regarding data-centric technological development in society, including government data policies, media manipulation, and the use of data in policing.</p><p><a href="https://pacscenter.stanford.edu/person/lucy-bernholz/"><strong>Lucy Bernholz</strong></a><strong> | @p2173</strong><br />Lucy Bernholz is a Senior Research Scholar at Stanford University’s Center on Philanthropy and Civil Society and Director of the Digital Civil Society Lab. She has been a Visiting Scholar at The David and Lucile Packard Foundation, and a Fellow at the Rockefeller Foundation’s Bellagio Center, the Hybrid Reality Institute, and the New America Foundation. She is the author of numerous articles and books, including the annual <i>Blueprint Series on Philanthropy and the Social Economy</i>, the 2010 publication <i>Disrupting Philanthropy</i>, and her 2004 book <i>Creating Philanthropic Capital Markets: The Deliberate Evolution</i>. She is a co-editor of <i>Philanthropy in Democratic Societies</i> (2016, University of Chicago Press) and of the forthcoming volume <i>Digital Technology and Democratic Theory</i>. She writes extensively on philanthropy, technology, and policy on her award-winning blog, philanthropy2173.com.</p><p>She studied history and has a B.A. from Yale University, where she played field hockey and captained the lacrosse team, and an M.A. 
and Ph.D. from Stanford University.</p><p><a href="https://pacscenter.stanford.edu/person/rob-reich/"><strong>Rob Reich</strong></a><strong> | @robreich</strong><br />Rob Reich is professor of political science and, by courtesy, professor of philosophy and at the Graduate School of Education, at Stanford University. He is the director of the Center for Ethics in Society and co-director of the Center on Philanthropy and Civil Society (publisher of the <i>Stanford Social Innovation Review</i>), both at Stanford University. He is the author most recently of <i>Just Giving: Why Philanthropy is Failing Democracy and How It Can Do Better</i> (Princeton University Press, 2018) and <i>Philanthropy in Democratic Societies: History, Institutions, Values</i> (edited with Chiara Cordelli and Lucy Bernholz, University of Chicago Press, 2016). He is also the author of several books on education: <i>Bridging Liberalism and Multiculturalism in American Education</i> (University of Chicago Press, 2002) and <i>Education, Justice, and Democracy</i> (edited with Danielle Allen, University of Chicago Press, 2013). His current work focuses on ethics, public policy, and technology, and he serves as associate director of the Human-Centered Artificial Intelligence initiative at Stanford. Rob is the recipient of multiple teaching awards, including the Walter J. Gores Award, Stanford’s highest honor for teaching. Reich was a sixth-grade teacher at Rusk Elementary School in Houston, Texas, before attending graduate school. He is a board member of the magazine Boston Review, of Giving Tuesday, and of the Spencer Foundation. More details at his personal webpage: http://robreich.stanford.edu</p><p><a href="https://www.lse.ac.uk/media-and-communications/people/academic-staff/seeta-pena-gangadharan"><strong>Seeta Peña Gangadharan</strong></a><br />Dr Seeta Peña Gangadharan is Associate Professor in the Department of Media and Communications at the London School of Economics and Political Science. 
Her work focuses on inclusion, exclusion, and marginalization, as well as questions around democracy, social justice, and technological governance. She currently co-leads two projects: Our Data Bodies, which examines the impact of data collection and data-driven technologies on members of marginalized communities in the United States, and Justice, Equity, and Technology, which explores the impacts of data-driven technologies and infrastructures on European civil society. She is also a visiting scholar in the School of Media Studies at The New School, Affiliated Fellow of Yale Law School’s Information Society Project, and Affiliate Fellow of Data & Society Research Institute.</p><p>Before joining the Department in 2015, Seeta was Senior Research Fellow at New America’s Open Technology Institute, addressing policies and practices related to digital inclusion, privacy, and “big data.” Before OTI, she was a Postdoctoral Associate in Law and MacArthur Fellow at Yale Law School’s Information Society Project. She received her PhD from Stanford University and holds an MSc from the Department of Media and Communications at the London School of Economics and Political Science.</p><p>Seeta’s research has been supported by grants from Digital Trust Foundation, Institute of Museum and Library Services, Ford Foundation, Open Society Foundations, Stanford University’s Center on Philanthropy and Civil Society, and U.S. Department of Commerce’s Broadband Technology Opportunities Program.</p><p><a href="https://www.hks.harvard.edu/faculty/archon-fung"><strong>Archon Fung</strong></a><strong> | @Arfung</strong><br />Archon Fung is the Winthrop Laflin McCormack Professor of Citizenship and Self-Government at the Harvard Kennedy School. His research explores policies, practices, and institutional designs that deepen the quality of democratic governance. He focuses upon public participation, deliberation, and transparency. 
He co-directs the Transparency Policy Project and leads democratic governance programs of the Ash Center for Democratic Governance and Innovation at the Kennedy School. His books include <i>Full Disclosure: The Perils and Promise of Transparency</i> (Cambridge University Press, with Mary Graham and David Weil) and <i>Empowered Participation: Reinventing Urban Democracy</i> (Princeton University Press). He has authored five books, four edited collections, and over fifty articles appearing in professional journals. He received two S.B.s — in philosophy and physics — and his Ph.D. in political science from MIT.</p>
]]></content:encoded>
      <enclosure length="71785908" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/8a628a10-20ec-4379-927e-e1ff3e777271/audio/ae5f7e90-953c-4290-9a6b-889e6bfc7787/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Digital Technology and Democratic Theory</itunes:title>
      <itunes:author>Lucy Bernholz, Archon Fung, Robyn Caplan, Rob Reich, Seeta Peña Gangadharan</itunes:author>
      <itunes:duration>01:14:41</itunes:duration>
      <itunes:summary>How is technology radically changing our lives as citizens and participants in democratic governments? Data &amp; Society and Stanford PACS present this special book launch with Robyn Caplan, Lucy Bernholz, Rob Reich, Seeta Peña Gangadharan, and Archon Fung.</itunes:summary>
      <itunes:subtitle>How is technology radically changing our lives as citizens and participants in democratic governments? Data &amp; Society and Stanford PACS present this special book launch with Robyn Caplan, Lucy Bernholz, Rob Reich, Seeta Peña Gangadharan, and Archon Fung.</itunes:subtitle>
      <itunes:keywords>democracy, platforms, technology</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>78</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">66d1bb87-16d4-4b9d-8dce-e8890f1888ce</guid>
      <title>Governing an Algorithm in the Wild</title>
      <description><![CDATA[<p>Algorithms make a wide range of morally important decisions, and many people now argue that members of the public should be more directly involved in deciding the moral tradeoffs that such systems entail. But most ideas for public or stakeholder involvement are still on the drawing board, and there are few real stories of public deliberation over the design of a morally important algorithm. This talk explores one such story.</p><p>On December 4, 2014, the algorithm that allocates kidneys for transplant in the United States was replaced, following more than a decade of debate and planning. The development process was highly transparent and participatory, faced hard ethical questions explicitly, and incorporated elements of simulation and auditing that scholars often recommend. Scientist and researcher <strong>David Robinson</strong> describes how this story played out — including a twist ending — and draws out four broader lessons to inform the design of participation strategies for other high-stakes algorithms. The talk is hosted by Data & Society Senior Researcher <strong>Alex Rosenblat</strong>.</p>
]]></description>
      <pubDate>Wed, 13 Jan 2021 20:26:20 +0000</pubDate>
      <author>events@datasociety.net (David Robinson, Alex Rosenblat)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Algorithms make a wide range of morally important decisions, and many people now argue that members of the public should be more directly involved in deciding the moral tradeoffs that such systems entail. But most ideas for public or stakeholder involvement are still on the drawing board, and there are few real stories of public deliberation over the design of a morally important algorithm. This talk explores one such story.</p><p>On December 4, 2014, the algorithm that allocates kidneys for transplant in the United States was replaced, following more than a decade of debate and planning. The development process was highly transparent and participatory, faced hard ethical questions explicitly, and incorporated elements of simulation and auditing that scholars often recommend. Scientist and researcher <strong>David Robinson</strong> describes how this story played out — including a twist ending — and draws out four broader lessons to inform the design of participation strategies for other high-stakes algorithms. The talk is hosted by Data & Society Senior Researcher <strong>Alex Rosenblat</strong>.</p>
]]></content:encoded>
      <enclosure length="52247449" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/42e1c304-7ed0-41d3-abe8-fc5ca44632eb/audio/74d8dbd8-dc38-4787-9dda-b968515ab7cb/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Governing an Algorithm in the Wild</itunes:title>
      <itunes:author>David Robinson, Alex Rosenblat</itunes:author>
      <itunes:duration>00:54:25</itunes:duration>
      <itunes:summary>In this talk, Scientist and Researcher David Robinson and Data &amp; Society Senior Researcher Alex Rosenblat discuss a healthcare algorithm as a case study for algorithmic governance.</itunes:summary>
      <itunes:subtitle>In this talk, Scientist and Researcher David Robinson and Data &amp; Society Senior Researcher Alex Rosenblat discuss a healthcare algorithm as a case study for algorithmic governance.</itunes:subtitle>
      <itunes:keywords>algorithms, algorithmic governance</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>77</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">e97ca8fd-a246-4c65-af93-cfd28bc0a650</guid>
      <title>Lawgorithms: Everything Poverty Lawyers Need to Know About Tech, Law, and Social Justice</title>
      <description><![CDATA[<p>Automated decision-making systems make decisions about our lives, and those with low socioeconomic status often bear the brunt of the harms these systems cause. <a href="https://datasociety.net/library/poverty-lawgorithms/"><i>Poverty Lawgorithms: A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities</i></a> is a guide by Data & Society Faculty Fellow <strong>Michele Gilman</strong> to familiarize fellow poverty and civil legal services lawyers with the ins and outs of data-centric and automated decision-making systems so that they can clearly understand the sources of the problems their clients are facing and effectively advocate on their behalf.</p>
]]></description>
      <pubDate>Tue, 8 Dec 2020 21:44:08 +0000</pubDate>
      <author>events@datasociety.net (Meredith Broussard, Michele Gilman)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Automated decision-making systems make decisions about our lives, and those with low socioeconomic status often bear the brunt of the harms these systems cause. <a href="https://datasociety.net/library/poverty-lawgorithms/"><i>Poverty Lawgorithms: A Poverty Lawyer’s Guide to Fighting Automated Decision-Making Harms on Low-Income Communities</i></a> is a guide by Data & Society Faculty Fellow <strong>Michele Gilman</strong> to familiarize fellow poverty and civil legal services lawyers with the ins and outs of data-centric and automated decision-making systems so that they can clearly understand the sources of the problems their clients are facing and effectively advocate on their behalf.</p>
]]></content:encoded>
      <enclosure length="0" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/d37b9933-87e2-41cf-9ebf-2feba6b335a1/audio/f0f9518a-5204-492a-b849-cde83ccf6869/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Lawgorithms: Everything Poverty Lawyers Need to Know About Tech, Law, and Social Justice</itunes:title>
      <itunes:author>Meredith Broussard, Michele Gilman</itunes:author>
      <itunes:duration>00:56:43</itunes:duration>
      <itunes:summary>Michele Gilman joins Professor Meredith Broussard in a conversation on enhancing the digital literacy of poverty lawyers to better advocate for social justice and the low-income communities they serve.</itunes:summary>
      <itunes:subtitle>Michele Gilman joins Professor Meredith Broussard in a conversation on enhancing the digital literacy of poverty lawyers to better advocate for social justice and the low-income communities they serve.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>76</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">48c5b5cc-191e-43dc-8eee-fda4c1482971</guid>
      <title>Adtech and the Attention Economy</title>
      <description><![CDATA[<p>Drawing on Tim Hwang’s new book, <a href="https://bookshop.org/a/14284/9780374538651" target="_blank"><i>Subprime Attention Crisis</i></a>, a revealing examination of digital advertising and the internet’s precarious foundation, this talk details how digital advertising—the beating heart of the internet—is at risk of collapsing. From the unreliability of advertising numbers and the unregulated automation of advertising bidding wars, to the simple fact that online ads mostly fail to work, Hwang demonstrates that while consumers’ attention has never been more prized, the true value of that attention itself is wildly misrepresented. Audience Q&A follows the discussion.</p><p>“In this well-grounded, heretical attack on the fictions that uphold the online advertising ecosystem, <i>Subprime Attention Crisis</i> destroys the illusion that programmatic ads are effective and financially sound. One can only hope that this book will be used to pop the bubble that benefits so few.” — danah boyd, author of <i>It’s Complicated: The Social Lives of Networked Teens</i>, founder of Data & Society, and Principal Researcher at Microsoft Research</p>
]]></description>
      <pubDate>Tue, 8 Dec 2020 21:30:05 +0000</pubDate>
      <author>events@datasociety.net (Moira Weigel, Tim Hwang)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Drawing on Tim Hwang’s new book, <a href="https://bookshop.org/a/14284/9780374538651" target="_blank"><i>Subprime Attention Crisis</i></a>, a revealing examination of digital advertising and the internet’s precarious foundation, this talk details how digital advertising—the beating heart of the internet—is at risk of collapsing. From the unreliability of advertising numbers and the unregulated automation of advertising bidding wars, to the simple fact that online ads mostly fail to work, Hwang demonstrates that while consumers’ attention has never been more prized, the true value of that attention itself is wildly misrepresented. Audience Q&A follows the discussion.</p><p>“In this well-grounded, heretical attack on the fictions that uphold the online advertising ecosystem, <i>Subprime Attention Crisis</i> destroys the illusion that programmatic ads are effective and financially sound. One can only hope that this book will be used to pop the bubble that benefits so few.” — danah boyd, author of <i>It’s Complicated: The Social Lives of Networked Teens</i>, founder of Data & Society, and Principal Researcher at Microsoft Research</p>
]]></content:encoded>
      <enclosure length="54796371" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/8fca3f4b-3c9f-4b69-a3c9-991cabee551d/audio/65ad0331-1f52-44f4-821c-60fb81c92eeb/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Adtech and the Attention Economy</itunes:title>
      <itunes:author>Moira Weigel, Tim Hwang</itunes:author>
      <itunes:duration>00:56:54</itunes:duration>
      <itunes:summary>Data &amp; Society Sociotechnical Security Researcher Moira Weigel hosts author Tim Hwang to discuss the way big tech financializes attention. Weigel and Hwang explore how the false promises of adtech are just one example of tech-solutionism’s many fictions. </itunes:summary>
      <itunes:subtitle>Data &amp; Society Sociotechnical Security Researcher Moira Weigel hosts author Tim Hwang to discuss the way big tech financializes attention. Weigel and Hwang explore how the false promises of adtech are just one example of tech-solutionism’s many fictions. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>74</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">98c19962-bef8-4f07-a248-e1e80505e7c3</guid>
      <title>Electionland Misinformation</title>
      <description><![CDATA[<p>ProPublica editor and reporter Ryan McCarthy and Data & Society Senior Research Analyst Cristina López G. have looked into dynamics of amplification, inconsistent enforcement of community standards, and the democratic pitfalls of hyper-targeting audiences in their reporting and research. In this Databite, they discuss their findings and recommendations for holding companies accountable, protecting voting rights, and stopping the spread of false election claims. Audience Q&A follows the discussion.</p><p><strong>Ryan McCarthy</strong> reports and edits stories for ProPublica’s Electionland, focusing on voting rights, election security, and misinformation.</p><p><strong>Cristina López G.</strong> conducts qualitative research on political disinformation and antagonistic amplification. She was born and raised in El Salvador, where she received an undergraduate law degree from Escuela Superior de Economía y Negocios (ESEN) and led a non-profit that promotes youth participation in politics and activism. She has been a weekly op-ed columnist for a major Salvadoran newspaper since 2010. She moved to Washington, DC in 2012 to pursue a Master’s in public policy from Georgetown University. After completing her degree, Cristina joined Media Matters for America as a researcher of Hispanic and Spanish-language media, focused on media coverage of immigration policies. She eventually became the organization’s deputy director for extremism, leading its research into extremism and disinformation that proliferate on tech platforms. She’s fluent in Spanish and memes.</p>
]]></description>
      <pubDate>Wed, 28 Oct 2020 19:48:07 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>ProPublica editor and reporter Ryan McCarthy and Data & Society Senior Research Analyst Cristina López G. have looked into dynamics of amplification, inconsistent enforcement of community standards, and the democratic pitfalls of hyper-targeting audiences in their reporting and research. In this Databite, they discuss their findings and recommendations for holding companies accountable, protecting voting rights, and stopping the spread of false election claims. Audience Q&A follows the discussion.</p><p><strong>Ryan McCarthy</strong> reports and edits stories for ProPublica’s Electionland, focusing on voting rights, election security, and misinformation.</p><p><strong>Cristina López G.</strong> conducts qualitative research on political disinformation and antagonistic amplification. She was born and raised in El Salvador, where she received an undergraduate law degree from Escuela Superior de Economía y Negocios (ESEN) and led a non-profit that promotes youth participation in politics and activism. She has been a weekly op-ed columnist for a major Salvadoran newspaper since 2010. She moved to Washington, DC in 2012 to pursue a Master’s in public policy from Georgetown University. After completing her degree, Cristina joined Media Matters for America as a researcher of Hispanic and Spanish-language media, focused on media coverage of immigration policies. She eventually became the organization’s deputy director for extremism, leading its research into extremism and disinformation that proliferate on tech platforms. She’s fluent in Spanish and memes.</p>
]]></content:encoded>
      <enclosure length="53465142" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/788cd7dd-8e5d-49c6-9378-1438fadb2135/audio/5de689eb-200f-4d51-bd07-a04346d7c66c/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Electionland Misinformation</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:55:31</itunes:duration>
      <itunes:summary>How does political misinformation—and outright lies—get amplified on social media and tech platforms?</itunes:summary>
      <itunes:subtitle>How does political misinformation—and outright lies—get amplified on social media and tech platforms?</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>75</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">71698a31-a132-4479-8eb8-99a4d38c862c</guid>
      <title>Metrics, Media, and Race</title>
      <description><![CDATA[<p><a href="https://www.freepress.net/about/staff/joseph-torres" target="_blank"><strong>Joseph Torres</strong></a>, Free Press’ Senior Director of Strategy and Engagement, advocates in Washington to ensure that our nation’s media policies serve the public interest, and builds coalitions to broaden the movement’s base. Joseph writes frequently on media and internet issues and is the co-author of <i>The New York Times</i> bestseller <a href="https://bookshop.org/a/14284/9781844671113" target="_blank"><i>News for All the People: The Epic Story of Race and the American Media</i></a>. He is the 2015 recipient of the Everett C. Parker Award, which recognizes an individual whose work embodies the principles and values of the public interest. Before joining Free Press, Joseph worked as deputy director of the National Association of Hispanic Journalists and was a journalist for several years.</p><p><a href="http://angelechristin.com/" target="_blank"><strong>Angèle Christin</strong></a> is an assistant professor in the Department of Communication at Stanford University. She studies how algorithms and analytics transform professional values, expertise, and work practices. Her book, <a href="https://press.princeton.edu/books/ebook/9780691200002/metrics-at-work" target="_blank"><i>Metrics at Work: Journalism and the Contested Meaning of Algorithms</i></a> (Princeton University Press, 2020) focuses on the case of web journalism, analyzing the growing importance of audience data in web newsrooms in the U.S. and France. Drawing on ethnographic methods, Angèle shows how American and French journalists make sense of traffic numbers in different ways, which in turn has distinct effects on the production of news in the two countries. Angèle is currently a Visiting Researcher with the Social Media Collective at Microsoft Research New England. 
She is an affiliate at Data & Society Research Institute.</p><p><a href="https://datasociety.net/people/boyd-danah/"><strong>danah boyd</strong></a> is the founder and president of Data & Society and a partner researcher at Microsoft Research. Her research is focused on making certain that society has a nuanced understanding of the relationship between technology and society, especially as issues of inequity and bias emerge. She is the author of <a href="https://bookshop.org/a/14284/9780300199000" target="_blank"><i>It’s Complicated: The Social Lives of Networked Teens</i></a>, and has authored or co-authored numerous books, articles, and essays. She is a trustee of the National Museum of the American Indian, a director of the Social Science Research Council, and a director of Crisis Text Line. She has been recognized by numerous organizations, including receiving the Electronic Frontier Foundation’s Pioneer/Barlow Award and being selected as a 2011 Young Global Leader of the World Economic Forum. Originally trained in computer science before retraining under anthropologists, danah has a Ph.D. from the University of California at Berkeley’s School of Information.</p>
]]></description>
      <pubDate>Tue, 20 Oct 2020 16:47:25 +0000</pubDate>
      <author>events@datasociety.net (Angèle Christin, danah boyd, Joseph Torres)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><a href="https://www.freepress.net/about/staff/joseph-torres" target="_blank"><strong>Joseph Torres</strong></a>, Free Press’ Senior Director of Strategy and Engagement, advocates in Washington to ensure that our nation’s media policies serve the public interest, and builds coalitions to broaden the movement’s base. Joseph writes frequently on media and internet issues and is the co-author of <i>The New York Times</i> bestseller <a href="https://bookshop.org/a/14284/9781844671113" target="_blank"><i>News for All the People: The Epic Story of Race and the American Media</i></a>. He is the 2015 recipient of the Everett C. Parker Award, which recognizes an individual whose work embodies the principles and values of the public interest. Before joining Free Press, Joseph worked as deputy director of the National Association of Hispanic Journalists and was a journalist for several years.</p><p><a href="http://angelechristin.com/" target="_blank"><strong>Angèle Christin</strong></a> is an assistant professor in the Department of Communication at Stanford University. She studies how algorithms and analytics transform professional values, expertise, and work practices. Her book, <a href="https://press.princeton.edu/books/ebook/9780691200002/metrics-at-work" target="_blank"><i>Metrics at Work: Journalism and the Contested Meaning of Algorithms</i></a> (Princeton University Press, 2020), focuses on the case of web journalism, analyzing the growing importance of audience data in web newsrooms in the U.S. and France. Drawing on ethnographic methods, Angèle shows how American and French journalists make sense of traffic numbers in different ways, which in turn has distinct effects on the production of news in the two countries. Angèle is currently a Visiting Researcher with the Social Media Collective at Microsoft Research New England. She is an affiliate at the Data & Society Research Institute.</p><p><a href="https://datasociety.net/people/boyd-danah/"><strong>danah boyd</strong></a> is the founder and president of Data & Society and a partner researcher at Microsoft Research. Her research is focused on making certain that society has a nuanced understanding of the relationship between technology and society, especially as issues of inequity and bias emerge. She is the author of <a href="https://bookshop.org/a/14284/9780300199000" target="_blank"><i>It’s Complicated: The Social Lives of Networked Teens</i></a>, and has authored or co-authored numerous books, articles, and essays. She is a trustee of the National Museum of the American Indian, a director of the Social Science Research Council, and a director of Crisis Text Line. She has been recognized by numerous organizations, including receiving the Electronic Frontier Foundation’s Pioneer/Barlow Award and being selected as a 2011 Young Global Leader of the World Economic Forum. Originally trained in computer science before retraining under anthropologists, danah has a Ph.D. from the University of California at Berkeley’s School of Information.</p>
]]></content:encoded>
      <enclosure length="51643512" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/8a2ff3a8-efd8-401d-9ec8-10cf844618ab/audio/8658e221-27c4-4bec-9c82-a80dc4578429/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Metrics, Media, and Race</itunes:title>
      <itunes:author>Angèle Christin, danah boyd, Joseph Torres</itunes:author>
      <itunes:duration>00:53:37</itunes:duration>
      <itunes:summary>Does the current metrics-driven news landscape emerge from, and potentially reinforce, historical racial inequality? Angèle Christin, author of the new book &quot;Metrics at Work: Journalism and the Contested Meaning of Algorithms,&quot; and Joseph Torres, author of the classic &quot;News For All the People: The Epic Story of Race and the American Media&quot; (co-authored with Juan González), analyze racial divisions in media-making and their interplay with data-centric technology. This conversation is hosted by Data &amp; Society Founder and President danah boyd.</itunes:summary>
      <itunes:subtitle>Does the current metrics-driven news landscape emerge from, and potentially reinforce, historical racial inequality? Angèle Christin, author of the new book &quot;Metrics at Work: Journalism and the Contested Meaning of Algorithms,&quot; and Joseph Torres, author of the classic &quot;News For All the People: The Epic Story of Race and the American Media&quot; (co-authored with Juan González), analyze racial divisions in media-making and their interplay with data-centric technology. This conversation is hosted by Data &amp; Society Founder and President danah boyd.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>73</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">69f49142-4f6f-444c-bbcb-31787c050945</guid>
      <title>If Then: How the Simulmatics Corporation Invented the Future</title>
      <description><![CDATA[<p>The Simulmatics Corporation, launched during the Cold War, mined data, targeted voters, manipulated consumers, destabilized politics, and disordered knowledge―decades before Facebook, Google, and Cambridge Analytica. Lepore, best-selling author of <i>These Truths</i>, came across the company’s papers in MIT’s archives and set out to tell this forgotten history, the long-lost backstory to the methods, and the arrogance, of Silicon Valley.</p><p>Founded in 1959 by some of the nation’s leading social scientists―“the best and the brightest, fatally brilliant, Icaruses with wings of feathers and wax, flying to the sun”―Simulmatics proposed to predict and manipulate the future by way of the computer simulation of human behavior. In summers, with their wives and children in tow, the company’s scientists met on the beach in Long Island under a geodesic, honeycombed dome, where they built a “People Machine” that aimed to model everything from buying a dishwasher to counterinsurgency to casting a vote. Deploying their “People Machine” from New York, Washington, Cambridge, and even Saigon, Simulmatics’ clients included the John F. Kennedy presidential campaign, the <i>New York Times</i>, the Department of Defense, and others: Simulmatics had a hand in everything from political races to the Vietnam War to the Johnson administration’s ill-fated attempt to predict race riots. The scientists of Simulmatics believed they had invented “the A-bomb of the social sciences.” They did not predict that it would take decades to detonate, like a long-buried grenade. But, in the early years of the twenty-first century, that bomb did detonate, creating a world in which corporations collect data and model behavior and target messages about the most ordinary of decisions, leaving people all over the world, long before the global pandemic, crushed by feelings of helplessness. 
This history has a past; <i>If Then</i> is its cautionary tale.</p><p><strong>Jill Lepore</strong> is the David Woods Kemper ’41 Professor of American History and Affiliate Professor of Law at Harvard University. She is also a staff writer at <i>The New Yorker</i> and host of the podcast <a href="https://scholar.harvard.edu/jlepore/thelastarchive.com" target="_blank">The Last Archive</a>. Her many books include <a href="http://www.thesetruthsbook.com/" target="_blank"><i>These Truths: A History of the United States</i></a> (2018), an international bestseller that was named one of <i>Time</i> magazine’s top ten non-fiction books of the decade. (A <a href="http://www.publicseminar.org/2019/05/on-these-truths/" target="_blank">recent essay</a> considers responses to the book.) Her latest book, <a href="http://simulmatics.com/" target="_blank"><i>If Then: How the Simulmatics Corporation Invented the Future</i></a>, was published on September 15, 2020.</p><p><strong>danah boyd</strong> is founder and president of <a href="https://www.datasociety.net" target="_blank">Data & Society</a>, a partner researcher at Microsoft Research, and a visiting professor at New York University. Her research is focused on making certain that society has a nuanced understanding of the relationship between technology and society, especially as issues of inequity and bias emerge. More on boyd <a href="https://datasociety.net/people/boyd-danah/">here</a>.</p>
]]></description>
      <pubDate>Tue, 6 Oct 2020 19:54:05 +0000</pubDate>
      <author>events@datasociety.net (danah boyd, Joel Whitney, Jill Lepore)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>The Simulmatics Corporation, launched during the Cold War, mined data, targeted voters, manipulated consumers, destabilized politics, and disordered knowledge―decades before Facebook, Google, and Cambridge Analytica. Lepore, best-selling author of <i>These Truths</i>, came across the company’s papers in MIT’s archives and set out to tell this forgotten history, the long-lost backstory to the methods, and the arrogance, of Silicon Valley.</p><p>Founded in 1959 by some of the nation’s leading social scientists―“the best and the brightest, fatally brilliant, Icaruses with wings of feathers and wax, flying to the sun”―Simulmatics proposed to predict and manipulate the future by way of the computer simulation of human behavior. In summers, with their wives and children in tow, the company’s scientists met on the beach in Long Island under a geodesic, honeycombed dome, where they built a “People Machine” that aimed to model everything from buying a dishwasher to counterinsurgency to casting a vote. Deploying their “People Machine” from New York, Washington, Cambridge, and even Saigon, Simulmatics’ clients included the John F. Kennedy presidential campaign, the <i>New York Times</i>, the Department of Defense, and others: Simulmatics had a hand in everything from political races to the Vietnam War to the Johnson administration’s ill-fated attempt to predict race riots. The scientists of Simulmatics believed they had invented “the A-bomb of the social sciences.” They did not predict that it would take decades to detonate, like a long-buried grenade. But, in the early years of the twenty-first century, that bomb did detonate, creating a world in which corporations collect data and model behavior and target messages about the most ordinary of decisions, leaving people all over the world, long before the global pandemic, crushed by feelings of helplessness. 
This history has a past; <i>If Then</i> is its cautionary tale.</p><p><strong>Jill Lepore</strong> is the David Woods Kemper ’41 Professor of American History and Affiliate Professor of Law at Harvard University. She is also a staff writer at <i>The New Yorker</i> and host of the podcast <a href="https://scholar.harvard.edu/jlepore/thelastarchive.com" target="_blank">The Last Archive</a>. Her many books include <a href="http://www.thesetruthsbook.com/" target="_blank"><i>These Truths: A History of the United States</i></a> (2018), an international bestseller that was named one of <i>Time</i> magazine’s top ten non-fiction books of the decade. (A <a href="http://www.publicseminar.org/2019/05/on-these-truths/" target="_blank">recent essay</a> considers responses to the book.) Her latest book, <a href="http://simulmatics.com/" target="_blank"><i>If Then: How the Simulmatics Corporation Invented the Future</i></a>, was published on September 15, 2020.</p><p><strong>danah boyd</strong> is founder and president of <a href="https://www.datasociety.net" target="_blank">Data & Society</a>, a partner researcher at Microsoft Research, and a visiting professor at New York University. Her research is focused on making certain that society has a nuanced understanding of the relationship between technology and society, especially as issues of inequity and bias emerge. More on boyd <a href="https://datasociety.net/people/boyd-danah/">here</a>.</p>
]]></content:encoded>
      <enclosure length="51858747" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd15-b5fd-4134-9316-27de084c9d3e/episodes/bc3b5382-0404-46b0-a3c8-447f06d2ffd4/audio/cd82d60a-4b11-45af-b8d7-f2ae24af8367/default_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>If Then: How the Simulmatics Corporation Invented the Future</itunes:title>
      <itunes:author>danah boyd, Joel Whitney, Jill Lepore</itunes:author>
      <itunes:duration>00:53:56</itunes:duration>
      <itunes:summary>Historian Jill Lepore discusses her new book &quot;If Then&quot; with Data &amp; Society Founder and President danah boyd. This talk is co-presented with Brooklyn Public Library.</itunes:summary>
      <itunes:subtitle>Historian Jill Lepore discusses her new book &quot;If Then&quot; with Data &amp; Society Founder and President danah boyd. This talk is co-presented with Brooklyn Public Library.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>72</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">0841d3ed-c4eb-44d5-b3ad-bf1c0cf42398</guid>
      <title>Origins of Trust and Safety with Alexander Macgillivray and Nicole Wong</title>
      <description><![CDATA[<p>Concurrent with the launch of the <a href="https://www.tspa.info">Trust & Safety Professional Association</a>, Alexander Macgillivray and Nicole Wong provide context and suggest paths forward as regulation, policy, and public awareness of content moderation and trust and safety issues evolve.</p><p>Audience Q&A follows the discussion.</p><p><strong>Speaker Bios:</strong></p><p><strong>Alexander Macgillivray</strong>, aka “amac,” is curious about many things, including law, policy, government, decision making, the Internet, algorithms, social justice, access to information, and the intersection of all of those. He was United States Deputy Chief Technology Officer for the last two-plus years of the Obama Administration. He was Twitter’s General Counsel and head of Corporate Development, Public Policy, Communications, and Trust & Safety. Before that he was Deputy General Counsel at Google, where he created the Product Counsel team. He has served on the board of the Campaign for Female Education (CAMFED) USA, was one of the early Berkman Klein Center folks, and was certified as a First Grade Teacher by the State of New Jersey. He is proud to be a board member at Data & Society, Creative Commons, and Alloy.us, an advisor to the Mozilla Tech Policy Fellows, and part of the founding team of the Trust & Safety Professional Association. https://www.bricoleur.org/</p><p><strong>Nicole Wong</strong> develops international privacy, content, and regulatory strategies for technology companies. She previously served as Deputy U.S. Chief Technology Officer in the Obama Administration, focused on internet, privacy, and innovation policy. Prior to her time in government, Nicole was Google’s Vice President and Deputy General Counsel, and Twitter’s Legal Director for Products. She frequently speaks on issues related to law and technology, including five appearances before the U.S. Congress.
Nicole chairs the board of Friends of Global Voices, a non-profit organization dedicated to supporting citizen and online media projects globally, and sits on the boards of WITNESS, an organization supporting the use of video to advance human rights; the Mozilla Foundation, which promotes the open internet; and The Markup, a non-profit investigative news organization covering technology. Nicole currently serves as co-chair of the Digital Freedom Forum. More info here: about.me/nwong.</p><p><strong>Robyn Caplan</strong> is a Researcher at Data & Society, studying issues related to platform governance and content standards. Her most recent work investigates the extent to which organizational dynamics at major platform companies impact the development and enforcement of policies geared towards limiting disinformation and hate speech, and the role that regulation, industry coordination, and advocacy can play in changing platform policies. Her work has been published in journals such as First Monday, Big Data & Society, and Feminist Media Studies. She has had editorials featured in The New York Times, and her work has been featured by NBC News THINK and Al Jazeera. She has conducted research on a variety of issues regarding the effects of data-centric technological development on society, including government data policies, media manipulation, and the use of data in policing.</p>
]]></description>
      <pubDate>Wed, 22 Jul 2020 18:35:58 +0000</pubDate>
      <author>events@datasociety.net (Nicole Wong, Robyn Caplan, Alexander Macgillivray)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Concurrent with the launch of the <a href="https://www.tspa.info">Trust & Safety Professional Association</a>, Alexander Macgillivray and Nicole Wong provide context and suggest paths forward as regulation, policy, and public awareness of content moderation and trust and safety issues evolve.</p><p>Audience Q&A follows the discussion.</p><p><strong>Speaker Bios:</strong></p><p><strong>Alexander Macgillivray</strong>, aka “amac,” is curious about many things, including law, policy, government, decision making, the Internet, algorithms, social justice, access to information, and the intersection of all of those. He was United States Deputy Chief Technology Officer for the last two-plus years of the Obama Administration. He was Twitter’s General Counsel and head of Corporate Development, Public Policy, Communications, and Trust & Safety. Before that he was Deputy General Counsel at Google, where he created the Product Counsel team. He has served on the board of the Campaign for Female Education (CAMFED) USA, was one of the early Berkman Klein Center folks, and was certified as a First Grade Teacher by the State of New Jersey. He is proud to be a board member at Data & Society, Creative Commons, and Alloy.us, an advisor to the Mozilla Tech Policy Fellows, and part of the founding team of the Trust & Safety Professional Association. https://www.bricoleur.org/</p><p><strong>Nicole Wong</strong> develops international privacy, content, and regulatory strategies for technology companies. She previously served as Deputy U.S. Chief Technology Officer in the Obama Administration, focused on internet, privacy, and innovation policy. Prior to her time in government, Nicole was Google’s Vice President and Deputy General Counsel, and Twitter’s Legal Director for Products. She frequently speaks on issues related to law and technology, including five appearances before the U.S. Congress.
Nicole chairs the board of Friends of Global Voices, a non-profit organization dedicated to supporting citizen and online media projects globally, and sits on the boards of WITNESS, an organization supporting the use of video to advance human rights; the Mozilla Foundation, which promotes the open internet; and The Markup, a non-profit investigative news organization covering technology. Nicole currently serves as co-chair of the Digital Freedom Forum. More info here: about.me/nwong.</p><p><strong>Robyn Caplan</strong> is a Researcher at Data & Society, studying issues related to platform governance and content standards. Her most recent work investigates the extent to which organizational dynamics at major platform companies impact the development and enforcement of policies geared towards limiting disinformation and hate speech, and the role that regulation, industry coordination, and advocacy can play in changing platform policies. Her work has been published in journals such as First Monday, Big Data & Society, and Feminist Media Studies. She has had editorials featured in The New York Times, and her work has been featured by NBC News THINK and Al Jazeera. She has conducted research on a variety of issues regarding the effects of data-centric technological development on society, including government data policies, media manipulation, and the use of data in policing.</p>
]]></content:encoded>
      <enclosure length="56655251" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1724c71a-7b80-48ad-a881-bf48d0d2dc3f/db134-timeline-formixdown-final_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Origins of Trust and Safety with Alexander Macgillivray and Nicole Wong</itunes:title>
      <itunes:author>Nicole Wong, Robyn Caplan, Alexander Macgillivray</itunes:author>
      <itunes:duration>00:58:51</itunes:duration>
      <itunes:summary>Data &amp; Society Researcher Robyn Caplan moderates a discussion with Alexander Macgillivray and Nicole Wong on the professionalization of Trust and Safety teams and their influence in the current media and tech company landscape. </itunes:summary>
      <itunes:subtitle>Data &amp; Society Researcher Robyn Caplan moderates a discussion with Alexander Macgillivray and Nicole Wong on the professionalization of Trust and Safety teams and their influence in the current media and tech company landscape. </itunes:subtitle>
      <itunes:keywords>content moderation, platform governance, platforms, trust and safety</itunes:keywords>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>71</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">8734e25b-0491-4866-b564-f5ea77928522</guid>
      <title>Fellows Talks with Michele Gilman, Anita Say Chan, and Dan Bouk</title>
      <description><![CDATA[<p><strong>The Class Differential in Data Privacy | Michele Gilman</strong><br />Data & Society Faculty Fellow Michele Gilman discusses the ways that data-centric technologies adversely impact low-income communities. In her talk, Gilman argues there is a class differential in privacy law that harms poor people, but that poverty lawyers and their clients are working to challenge this differential in order to advance economic justice.</p><p><strong>Feminist Data Futures and Relational Infrastructures | Anita Say Chan</strong><br />Data & Society Fellow Anita Say Chan shares her work on data justice networks and research collectives in the global Americas, exploring their shared genealogies with feminist data methods developed at the turn of the century.</p><p><strong>The Depth of the Data | Dan Bouk</strong><br />Data isn’t simple, thin, or objective. Data has depth that can and must be read deeply. Data & Society Fellow Dan Bouk demonstrates such reading in this talk with democracy’s data, the data produced by the U.S. census.</p><p>Data & Society’s Director of Research <strong>Sareeta Amrute</strong> moderates the discussion and audience Q&A. Learn more about our fellows’ work, wide-ranging interdisciplinary connections, and a few of the provocative questions that have emerged this year.</p>
]]></description>
      <pubDate>Mon, 6 Jul 2020 22:57:21 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p><strong>The Class Differential in Data Privacy | Michele Gilman</strong><br />Data & Society Faculty Fellow Michele Gilman discusses the ways that data-centric technologies adversely impact low-income communities. In her talk, Gilman argues there is a class differential in privacy law that harms poor people, but that poverty lawyers and their clients are working to challenge this differential in order to advance economic justice.</p><p><strong>Feminist Data Futures and Relational Infrastructures | Anita Say Chan</strong><br />Data & Society Fellow Anita Say Chan shares her work on data justice networks and research collectives in the global Americas, exploring their shared genealogies with feminist data methods developed at the turn of the century.</p><p><strong>The Depth of the Data | Dan Bouk</strong><br />Data isn’t simple, thin, or objective. Data has depth that can and must be read deeply. Data & Society Fellow Dan Bouk demonstrates such reading in this talk with democracy’s data, the data produced by the U.S. census.</p><p>Data & Society’s Director of Research <strong>Sareeta Amrute</strong> moderates the discussion and audience Q&A. Learn more about our fellows’ work, wide-ranging interdisciplinary connections, and a few of the provocative questions that have emerged this year.</p>
]]></content:encoded>
      <enclosure length="57997323" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/12401404-dbf5-44b4-87ec-8c32a26eaacf/db133-timeline-mixdown_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Fellows Talks with Michele Gilman, Anita Say Chan, and Dan Bouk</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>01:00:14</itunes:duration>
      <itunes:summary>This Fellows Talks Databite showcases our 2019-2020 fellows cohort: Michele Gilman, Anita Say Chan, and Dan Bouk. </itunes:summary>
      <itunes:subtitle>This Fellows Talks Databite showcases our 2019-2020 fellows cohort: Michele Gilman, Anita Say Chan, and Dan Bouk. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>70</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">9dbcdd92-e4dc-4b5a-a4be-edf4f688c49b</guid>
      <title>On Race and Technoculture | Part II</title>
      <description><![CDATA[<p>This recording is a Q&A with André Brock following his presentation of <i>Distributed Blackness: African American Cybercultures</i>.</p><p>In <i>Distributed Blackness</i>, Brock asks where Blackness manifests in the ideology of Western technoculture. Using critical technocultural discourse analysis (Brock, 2018), Afro-optimism, and libidinal economic theory, this talk employs Black Twitter as an exemplar of Black cyberculture: digital practice and artifacts informed by a Black aesthetic.</p><p>Technoculture is the American mythos (Dinerstein, 2006) and ideology: a belief system powering the coercive, political, and carceral relations between culture and technology. Once enslaved, historically disenfranchised, never deemed literate, Blackness is understood as the object of Western technical and civilizational practices. This critical intervention for internet research and science and technology studies (STS) reorients Western technoculture’s practices of “race-as-technology” (Chun 2009) to visualize Blackness as technological subjects rather than as “things.” Hence, Black technoculture.</p>
]]></description>
      <pubDate>Mon, 8 Jun 2020 18:51:05 +0000</pubDate>
      <author>events@datasociety.net (André Brock, Sareeta Amrute)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>This recording is a Q&A with André Brock following his presentation of <i>Distributed Blackness: African American Cybercultures</i>.</p><p>In <i>Distributed Blackness</i>, Brock asks where Blackness manifests in the ideology of Western technoculture. Using critical technocultural discourse analysis (Brock, 2018), Afro-optimism, and libidinal economic theory, this talk employs Black Twitter as an exemplar of Black cyberculture: digital practice and artifacts informed by a Black aesthetic.</p><p>Technoculture is the American mythos (Dinerstein, 2006) and ideology: a belief system powering the coercive, political, and carceral relations between culture and technology. Once enslaved, historically disenfranchised, never deemed literate, Blackness is understood as the object of Western technical and civilizational practices. This critical intervention for internet research and science and technology studies (STS) reorients Western technoculture’s practices of “race-as-technology” (Chun 2009) to visualize Blackness as technological subjects rather than as “things.” Hence, Black technoculture.</p>
]]></content:encoded>
      <enclosure length="26320609" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/33e8dc33-7144-4517-a351-fc5ecf4f4f7a/part2-andre-final_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>On Race and Technoculture | Part II</itunes:title>
      <itunes:author>André Brock, Sareeta Amrute</itunes:author>
      <itunes:duration>00:27:24</itunes:duration>
      <itunes:summary>A Q&amp;A with André Brock, author of &quot;Distributed Blackness: African American Cybercultures,&quot; led by Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:summary>
      <itunes:subtitle>A Q&amp;A with André Brock, author of &quot;Distributed Blackness: African American Cybercultures,&quot; led by Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>69</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">90080831-7a0c-4ec8-904d-522e12eb1d8c</guid>
      <title>On Race and Technoculture | Part I</title>
      <description><![CDATA[<p>In <i>Distributed Blackness</i>, André Brock asks where Blackness manifests in the ideology of Western technoculture. Using critical technocultural discourse analysis (Brock, 2018), Afro-optimism, and libidinal economic theory, this talk employs Black Twitter as an exemplar of Black cyberculture: digital practice and artifacts informed by a Black aesthetic.</p><p>Technoculture is the American mythos (Dinerstein, 2006) and ideology: a belief system powering the coercive, political, and carceral relations between culture and technology. Once enslaved, historically disenfranchised, never deemed literate, Blackness is understood as the object of Western technical and civilizational practices. This critical intervention for internet research and science and technology studies (STS) reorients Western technoculture’s practices of “race-as-technology” (Chun 2009) to visualize Blackness as technological subjects rather than as “things.” Hence, Black technoculture.</p>
]]></description>
      <pubDate>Wed, 3 Jun 2020 18:52:12 +0000</pubDate>
      <author>events@datasociety.net (André Brock, Sareeta Amrute)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In <i>Distributed Blackness</i>, André Brock asks where Blackness manifests in the ideology of Western technoculture. Using critical technocultural discourse analysis (Brock, 2018), Afro-optimism, and libidinal economic theory, this talk employs Black Twitter as an exemplar of Black cyberculture: digital practice and artifacts informed by a Black aesthetic.</p><p>Technoculture is the American mythos (Dinerstein, 2006) and ideology: a belief system powering the coercive, political, and carceral relations between culture and technology. Once enslaved, historically disenfranchised, never deemed literate, Blackness is understood as the object of Western technical and civilizational practices. This critical intervention for internet research and science and technology studies (STS) reorients Western technoculture’s practices of “race-as-technology” (Chun 2009) to visualize Blackness as technological subjects rather than as “things.” Hence, Black technoculture.</p>
]]></content:encoded>
      <enclosure length="26913689" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/10610926-8971-42b3-92af-1de6d5ecb328/andrebrock-podcast-part1_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>On Race and Technoculture | Part I</itunes:title>
      <itunes:author>André Brock, Sareeta Amrute</itunes:author>
      <itunes:duration>00:28:02</itunes:duration>
      <itunes:summary>André Brock gives a talk on his book, &quot;Distributed Blackness: African American Cybercultures,&quot; with Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:summary>
      <itunes:subtitle>André Brock gives a talk on his book, &quot;Distributed Blackness: African American Cybercultures,&quot; with Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>68</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">1f57538b-4f10-4426-8548-8b905b69a6c9</guid>
      <title>Data Feminism</title>
      <description><![CDATA[<p>How can feminist thinking be operationalized into more ethical and equitable data practices? As data are increasingly mobilized in the service of governments and corporations, their unequal conditions of production, asymmetrical methods of application, and unequal effects on both individuals and groups have become increasingly difficult for data scientists—and others who rely on data in their work—to ignore. But it is precisely this power that makes it worth asking: “Data science by whom? Data science for whom? Data science, with whose interests in mind?” These are some of the questions that emerge from what we call data feminism: a way of thinking about data science and its communication that is informed by the past several decades of intersectional feminist activism and critical thought. This talk draws on insights from the authors' collaboratively crafted book about how challenges to the male/female binary can challenge other hierarchical (and empirically wrong) classification systems; how an understanding of emotion can expand our ideas about effective data visualization; and how the concept of “invisible labor” can expose the significant human efforts required by our automated systems. </p><p><strong>About the Speakers </strong><br /><strong>Catherine D’Ignazio </strong>(she/her) is a hacker mama, scholar, and artist/designer who focuses on feminist technology, data literacy, and civic engagement. She has run women’s health hackathons, designed global news recommendation systems, created talking and tweeting water quality sculptures, and led walking data visualizations to envision the future of sea level rise. Her book from MIT Press, Data Feminism, co-authored with Lauren Klein, charts a course for more ethical and empowering data science practices. D’Ignazio is an assistant professor of Urban Science and Planning in the Department of Urban Studies and Planning at MIT, where she is the Director of the Data + Feminism Lab. 
More information about Catherine can be found on her website at www.kanarinka.com. </p><p><strong>Lauren F. Klein </strong>(she/her) is a scholar and teacher whose work crosses the fields of data science, digital humanities, and early American literature. She has designed platforms for exploring the contents of historical newspapers, recreated forgotten visualization schemes with fabric and addressable LEDs, and, with her students, cooked meals from early American recipes—and then visualized the results. In 2017, she was named one of the “rising stars in digital humanities” by Inside Higher Ed. She is the author of An Archive of Taste: Race and Eating in the Early United States (University of Minnesota Press, 2020) and, with Catherine D’Ignazio, Data Feminism (MIT Press, 2020). With Matthew K. Gold, she edits Debates in the Digital Humanities, a hybrid print-digital publication stream that explores debates in the field as they emerge. Klein is an Associate Professor of English and Quantitative Theory & Methods at Emory University, where she also directs the Digital Humanities Lab. More information can be found on her website: lklein.com. </p><p><strong>About Databites </strong></p><p>Data & Society’s “Databites” speaker series presents timely conversations about the purpose and power of technology, bridging our interdisciplinary research with broader public conversations about the societal implications of data and automation. <br /><br /> </p>
]]></description>
      <pubDate>Thu, 21 May 2020 16:09:35 +0000</pubDate>
      <author>events@datasociety.net (Sareeta Amrute, Catherine D&apos;Ignazio, Lauren F. Klein)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>How can feminist thinking be operationalized into more ethical and equitable data practices? As data are increasingly mobilized in the service of governments and corporations, their unequal conditions of production, asymmetrical methods of application, and unequal effects on both individuals and groups have become increasingly difficult for data scientists—and others who rely on data in their work—to ignore. But it is precisely this power that makes it worth asking: “Data science by whom? Data science for whom? Data science, with whose interests in mind?” These are some of the questions that emerge from what we call data feminism: a way of thinking about data science and its communication that is informed by the past several decades of intersectional feminist activism and critical thought. This talk draws on insights from the authors' collaboratively crafted book about how challenges to the male/female binary can challenge other hierarchical (and empirically wrong) classification systems; how an understanding of emotion can expand our ideas about effective data visualization; and how the concept of “invisible labor” can expose the significant human efforts required by our automated systems. </p><p><strong>About the Speakers </strong><br /><strong>Catherine D’Ignazio </strong>(she/her) is a hacker mama, scholar, and artist/designer who focuses on feminist technology, data literacy, and civic engagement. She has run women’s health hackathons, designed global news recommendation systems, created talking and tweeting water quality sculptures, and led walking data visualizations to envision the future of sea level rise. Her book from MIT Press, Data Feminism, co-authored with Lauren Klein, charts a course for more ethical and empowering data science practices. D’Ignazio is an assistant professor of Urban Science and Planning in the Department of Urban Studies and Planning at MIT, where she is the Director of the Data + Feminism Lab. 
More information about Catherine can be found on her website at www.kanarinka.com. </p><p><strong>Lauren F. Klein </strong>(she/her) is a scholar and teacher whose work crosses the fields of data science, digital humanities, and early American literature. She has designed platforms for exploring the contents of historical newspapers, recreated forgotten visualization schemes with fabric and addressable LEDs, and, with her students, cooked meals from early American recipes—and then visualized the results. In 2017, she was named one of the “rising stars in digital humanities” by Inside Higher Ed. She is the author of An Archive of Taste: Race and Eating in the Early United States (University of Minnesota Press, 2020) and, with Catherine D’Ignazio, Data Feminism (MIT Press, 2020). With Matthew K. Gold, she edits Debates in the Digital Humanities, a hybrid print-digital publication stream that explores debates in the field as they emerge. Klein is an Associate Professor of English and Quantitative Theory & Methods at Emory University, where she also directs the Digital Humanities Lab. More information can be found on her website: lklein.com. </p><p><strong>About Databites </strong></p><p>Data & Society’s “Databites” speaker series presents timely conversations about the purpose and power of technology, bridging our interdisciplinary research with broader public conversations about the societal implications of data and automation. <br /><br /> </p>
]]></content:encoded>
      <enclosure length="56555788" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/0759874c-6f52-48b5-9e2d-3b35a9389490/df-timeline-mixdown-final_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Data Feminism</itunes:title>
      <itunes:author>Sareeta Amrute, Catherine D&apos;Ignazio, Lauren F. Klein</itunes:author>
      <itunes:duration>00:58:44</itunes:duration>
      <itunes:summary>Catherine D’Ignazio and Lauren F. Klein discuss their new book &quot;Data Feminism,&quot; with Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:summary>
      <itunes:subtitle>Catherine D’Ignazio and Lauren F. Klein discuss their new book &quot;Data Feminism,&quot; with Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>67</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">55cf9058-f14d-4b8b-90a1-e03494d6c401</guid>
      <title>Design Justice</title>
      <description><![CDATA[<p>Data & Society kicks off our online Databites series with <a href="http://schock.cc/" target="_blank"><strong>Sasha Costanza-Chock</strong></a>, whose new book, <a href="https://mitpress.mit.edu/books/design-justice" target="_blank"><i>Design Justice: Community-Led Practices to Build the Worlds We Need</i></a><i>,</i> re-imagines how design led by marginalized communities can become a tool to help dismantle structural inequality, advance collective liberation, and support ecological survival.</p><p>In this conversation with Data & Society’s Events Producer <a href="https://datasociety.net/people/guzman-rigoberto-lara/">Rigoberto Lara Guzmán</a>, Sasha shares her experience as a design researcher and a practitioner, highlights helpful Design Justice Network best practices, and explores how we might apply the principles of design justice to COVID-19 responses.</p><p>This talk was recorded on Wednesday, April 29, 2020.</p><p><strong>About the Speaker and Host</strong></p><p><strong>Sasha Costanza-Chock</strong> (pronouns: they/them or she/her) is a scholar, designer, and media-maker, and currently Associate Professor of Civic Media at MIT. They are a Faculty Associate at the Berkman-Klein Center for Internet & Society at Harvard University, Faculty Affiliate with the MIT Open Documentary Lab, and creator of the <a href="http://codesign.mit.edu/" target="_blank">MIT Codesign Studio</a> (codesign.mit.edu). Their work focuses on social movements, transformative media organizing, and design justice. Sasha’s new book, <a href="https://mitpress.mit.edu/books/design-justice" target="_blank"><i>Design Justice: Community-Led Practices to Build the Worlds We Need</i></a>, was published by MIT Press in March 2020. 
Sasha is a board member of <a href="https://alliedmedia.org/" target="_blank">Allied Media Projects</a> and a Steering Committee member of the <a href="http://designjusticenetwork.org/" target="_blank">Design Justice Network</a>.</p><p><strong>Rigoberto Lara Guzmán</strong> (pronouns: they/them or he/him) is a xicanx producer, artist, and community technologist. His work attends to the interaction of humans, objects, and the lived environment to explore coded knowledge systems and emergent ecologies. He designs experiences that facilitate situated learning and is currently unsettling socio-technical worlds at Data & Society.</p><p><strong>About Databites</strong></p><p>Data & Society’s “Databites” speaker series presents timely conversations about the purpose and power of technology, bridging our interdisciplinary research with broader public conversations about the societal implications of data and automation.</p>
]]></description>
      <pubDate>Wed, 06 May 2020 19:29:28 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Data & Society kicks off our online Databites series with <a href="http://schock.cc/" target="_blank"><strong>Sasha Costanza-Chock</strong></a>, whose new book, <a href="https://mitpress.mit.edu/books/design-justice" target="_blank"><i>Design Justice: Community-Led Practices to Build the Worlds We Need</i></a><i>,</i> re-imagines how design led by marginalized communities can become a tool to help dismantle structural inequality, advance collective liberation, and support ecological survival.</p><p>In this conversation with Data & Society’s Events Producer <a href="https://datasociety.net/people/guzman-rigoberto-lara/">Rigoberto Lara Guzmán</a>, Sasha shares her experience as a design researcher and a practitioner, highlights helpful Design Justice Network best practices, and explores how we might apply the principles of design justice to COVID-19 responses.</p><p>This talk was recorded on Wednesday, April 29, 2020.</p><p><strong>About the Speaker and Host</strong></p><p><strong>Sasha Costanza-Chock</strong> (pronouns: they/them or she/her) is a scholar, designer, and media-maker, and currently Associate Professor of Civic Media at MIT. They are a Faculty Associate at the Berkman-Klein Center for Internet & Society at Harvard University, Faculty Affiliate with the MIT Open Documentary Lab, and creator of the <a href="http://codesign.mit.edu/" target="_blank">MIT Codesign Studio</a> (codesign.mit.edu). Their work focuses on social movements, transformative media organizing, and design justice. Sasha’s new book, <a href="https://mitpress.mit.edu/books/design-justice" target="_blank"><i>Design Justice: Community-Led Practices to Build the Worlds We Need</i></a>, was published by MIT Press in March 2020. 
Sasha is a board member of <a href="https://alliedmedia.org/" target="_blank">Allied Media Projects</a> and a Steering Committee member of the <a href="http://designjusticenetwork.org/" target="_blank">Design Justice Network</a>.</p><p><strong>Rigoberto Lara Guzmán</strong> (pronouns: they/them or he/him) is a xicanx producer, artist, and community technologist. His work attends to the interaction of humans, objects, and the lived environment to explore coded knowledge systems and emergent ecologies. He designs experiences that facilitate situated learning and is currently unsettling socio-technical worlds at Data & Society.</p><p><strong>About Databites</strong></p><p>Data & Society’s “Databites” speaker series presents timely conversations about the purpose and power of technology, bridging our interdisciplinary research with broader public conversations about the societal implications of data and automation.</p>
]]></content:encoded>
      <enclosure length="50268184" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/849f08cf-7fbf-4e29-90a9-1697a8e15471/sasha-timeline-mixdown_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Design Justice</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:52:12</itunes:duration>
      <itunes:summary>Sasha Costanza-Chock discusses her new book &quot;Design Justice: Community-Led Practices to Build the Worlds We Need,&quot; which re-imagines how design led by marginalized communities can become a tool to help dismantle structural inequality, advance collective liberation, and support ecological survival.</itunes:summary>
      <itunes:subtitle>Sasha Costanza-Chock discusses her new book &quot;Design Justice: Community-Led Practices to Build the Worlds We Need,&quot; which re-imagines how design led by marginalized communities can become a tool to help dismantle structural inequality, advance collective liberation, and support ecological survival.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>66</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">df1c2034-da48-4c60-83f9-6d6e4b3ac9aa</guid>
      <title>Community and Accessibility Online</title>
      <description><![CDATA[<p>In this virtual conversation, Blind Accessibility Advocate Chancey Fleet and Artist Taeyoon Choi teach us about network building and activism opportunities. Now that we're more online, how can we seize this moment to build more inclusive, accessible communication and modes of connection? What tools and best practices can we activate in the current moment, and continue to prioritize in our programming moving forward? This talk was recorded on April 15. </p><p><strong>About the Speakers</strong></p><p><strong>Chancey Fleet</strong> is a Brooklyn-based accessibility advocate who coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Chancey was recognized as a 2017 Library Journal Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship, she plans to advance public understanding of and explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy. She proudly serves as the Vice President of the National Federation of the Blind of New York.</p><p><strong>Taeyoon Choi</strong> is an artist, educator, and activist based in New York and Seoul. His art practice involves performance, electronics, drawings, and installations that form the basis for storytelling in public spaces. 
He co-founded the School for Poetic Computation where he continues to organize sessions and teach classes.</p><p><strong>Resources</strong></p><ul><li><a href="https://rootedinrights.org/">Rooted in Rights</a></li><li>Guide: <a href="https://www.washington.edu/accessibility/online-meetings/">Hosting Accessible Online Meetings</a></li><li><a href="https://www.w3.org/TR/WCAG20/">Web Content Accessibility Guidelines</a></li><li><a href="http://www.perkins.org/stories/nine-essential-tips-for-working-with-people-who-are-blind">Blind Inclusivity Resources</a></li><li><a href="https://ww3.aauw.org/resource/how-to-build-a-phone-tree/">How to Build a Phone Tree</a></li><li><a href="https://merylalper.com/giving-voice/"><i>Giving Voice</i></a> by Meryl Alper</li><li>Y Combinator: <a href="https://news.ycombinator.com/item?id=22918980">I’m a Software Engineer Going Blind -- How Should I Prepare?</a></li><li><a href="https://www.consentfultech.io/">Consentful Tech Project</a></li><li><a href="https://twitter.com/a11ynyc">NYC Accessibility and Inclusive Design Meetup Group</a></li><li><a href="https://medium.com/@shanekanadywfd/covid-19-is-reshaping-the-future-of-work-for-people-with-disabilities-50c55817c232">COVID-19 is reshaping the future of work for people with disabilities</a> by Shane Kanady</li></ul><p>Taeyoon's recommendations:</p><ul><li><a href="http://www.dukeupress.edu/staying-with-the-trouble">Staying with the Trouble</a> by Donna Haraway</li><li><a href="https://www.buzzfeednews.com/article/triciawang/a-lesson-from-the-people-of-wuhan-community-hyper-local">You Can Learn Something From The People Of Wuhan</a> by Tricia Wang</li><li><a href="http://www.artasiapacific.com/News/ResistingTheRacistBlameGame">Resisting the Racist Blame Game</a> on <i>ArtAsiaPacific</i></li><li>"<a href="https://www.instagram.com/p/B9Rt1BxHHZe/">How to Deal with Racism</a>" by Taeyoon Choi on <i>I Weigh</i></li><li><a href="https://beminor.com/detail.php?number=14552&thread=03r01r03">"Sewol, COVID-19 and the lack of mourning"</a> (세월호와 코로나19, 애도의 부재)</li></ul><p>Chancey's recommendations:</p><ul><li><a href="https://twitter.com/ChanceyFleet/status/1252715286882586625">Her Twitter thread on alt-text</a></li><li><a href="https://www.netflix.com/title/81001496">Crip Camp</a> on Netflix</li></ul>
]]></description>
      <pubDate>Wed, 29 Apr 2020 19:46:23 +0000</pubDate>
      <author>events@datasociety.net (Taeyoon Choi, Chancey Fleet, Rigoberto Lara Guzmán)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In this virtual conversation, Blind Accessibility Advocate Chancey Fleet and Artist Taeyoon Choi teach us about network building and activism opportunities. Now that we're more online, how can we seize this moment to build more inclusive, accessible communication and modes of connection? What tools and best practices can we activate in the current moment, and continue to prioritize in our programming moving forward? This talk was recorded on April 15. </p><p><strong>About the Speakers</strong></p><p><strong>Chancey Fleet</strong> is a Brooklyn-based accessibility advocate who coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Chancey was recognized as a 2017 Library Journal Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship, she plans to advance public understanding of and explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy. She proudly serves as the Vice President of the National Federation of the Blind of New York.</p><p><strong>Taeyoon Choi</strong> is an artist, educator, and activist based in New York and Seoul. His art practice involves performance, electronics, drawings, and installations that form the basis for storytelling in public spaces. 
He co-founded the School for Poetic Computation where he continues to organize sessions and teach classes.</p><p><strong>Resources</strong></p><ul><li><a href="https://rootedinrights.org/">Rooted in Rights</a></li><li>Guide: <a href="https://www.washington.edu/accessibility/online-meetings/">Hosting Accessible Online Meetings</a></li><li><a href="https://www.w3.org/TR/WCAG20/">Web Content Accessibility Guidelines</a></li><li><a href="http://www.perkins.org/stories/nine-essential-tips-for-working-with-people-who-are-blind">Blind Inclusivity Resources</a></li><li><a href="https://ww3.aauw.org/resource/how-to-build-a-phone-tree/">How to Build a Phone Tree</a></li><li><a href="https://merylalper.com/giving-voice/"><i>Giving Voice</i></a> by Meryl Alper</li><li>Y Combinator: <a href="https://news.ycombinator.com/item?id=22918980">I’m a Software Engineer Going Blind -- How Should I Prepare?</a></li><li><a href="https://www.consentfultech.io/">Consentful Tech Project</a></li><li><a href="https://twitter.com/a11ynyc">NYC Accessibility and Inclusive Design Meetup Group</a></li><li><a href="https://medium.com/@shanekanadywfd/covid-19-is-reshaping-the-future-of-work-for-people-with-disabilities-50c55817c232">COVID-19 is reshaping the future of work for people with disabilities</a> by Shane Kanady</li></ul><p>Taeyoon's recommendations:</p><ul><li><a href="http://www.dukeupress.edu/staying-with-the-trouble">Staying with the Trouble</a> by Donna Haraway</li><li><a href="https://www.buzzfeednews.com/article/triciawang/a-lesson-from-the-people-of-wuhan-community-hyper-local">You Can Learn Something From The People Of Wuhan</a> by Tricia Wang</li><li><a href="http://www.artasiapacific.com/News/ResistingTheRacistBlameGame">Resisting the Racist Blame Game</a> on <i>ArtAsiaPacific</i></li><li>"<a href="https://www.instagram.com/p/B9Rt1BxHHZe/">How to Deal with Racism</a>" by Taeyoon Choi on <i>I Weigh</i></li><li><a href="https://beminor.com/detail.php?number=14552&thread=03r01r03">"Sewol, COVID-19 and the lack of mourning"</a> (세월호와 코로나19, 애도의 부재)</li></ul><p>Chancey's recommendations:</p><ul><li><a href="https://twitter.com/ChanceyFleet/status/1252715286882586625">Her Twitter thread on alt-text</a></li><li><a href="https://www.netflix.com/title/81001496">Crip Camp</a> on Netflix</li></ul>
]]></content:encoded>
      <enclosure length="47112813" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/3a7d51d6-7f65-44b2-b019-3d7fce50107b/ph-chancey-taeyoon-mixdown-final_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Community and Accessibility Online</itunes:title>
      <itunes:author>Taeyoon Choi, Chancey Fleet, Rigoberto Lara Guzmán</itunes:author>
      <itunes:duration>00:49:04</itunes:duration>
      <itunes:summary>Blind Accessibility Advocate Chancey Fleet and Artist Taeyoon Choi discuss network building and activism opportunities in the context of COVID-19. </itunes:summary>
      <itunes:subtitle>Blind Accessibility Advocate Chancey Fleet and Artist Taeyoon Choi discuss network building and activism opportunities in the context of COVID-19. </itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>65</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">f70cd3e1-d73e-450f-9d2c-0408889588fd</guid>
      <title>Abolish Big Data</title>
      <description><![CDATA[<p>Big Data is more than a collection of technologies or a revolution in measurement and prediction. It has become a philosophy: an ideological regime that determines how decisions are made, and who makes them. It gives legitimacy to a new form of social and political control that takes the digital traces of our existence and then finds ways to use them to sort and manage populations. Big Data is part of a long and pervasive historical legacy of scientific oppression, aggressive public policy, and the most influential political and economic institution that has shaped and continues to shape this country’s economy: chattel slavery. Algorithms and other data technologies are the engines that facilitate the ongoing evolution of chattel slavery into the Prison Industrial Complex, justify the militarization of schoolyards and borders alike, and continue the expansion of contemporary practices of peonage.</p><p><strong>About the Speaker</strong></p><p><a href="https://twitter.com/yeshican?lang=en" target="_blank">Yeshimabeit Milner</a> is the founder & executive director of <a href="http://d4bl.org/about.html" target="_blank">Data for Black Lives</a>. Since she was 17, she has worked behind the scenes as a movement builder, technologist, and data scientist on a number of campaigns. She started Data for Black Lives because for too long she straddled the worlds of data and organizing and was determined to break down the silos to harness the power of data to make change in the lives of Black people. In two years, Data for Black Lives has raised over $2 million, hosted two sold-out conferences at the MIT Media Lab, and has changed the conversation around big data and technology across the U.S. and globally.</p><p>As the founder of Data for Black Lives, her work has received much acclaim. 
Yeshimabeit is an Echoing Green Black Male Achievement Fellow and an Ashoka Fellow; she joins the founders of Black Lives Matter and Occupy Wall Street in the distinguished inaugural class of Roddenberry Foundation Fellows and, most recently, was named one of the Forbes 30 Under 30.</p>
]]></description>
      <pubDate>Tue, 10 Mar 2020 17:25:03 +0000</pubDate>
      <author>events@datasociety.net (Yeshimabeit Milner, Janet Haven)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Big Data is more than a collection of technologies or a revolution in measurement and prediction. It has become a philosophy: an ideological regime that determines how decisions are made, and who makes them. It gives legitimacy to a new form of social and political control that takes the digital traces of our existence and then finds ways to use them to sort and manage populations. Big Data is part of a long and pervasive historical legacy of scientific oppression, aggressive public policy, and the most influential political and economic institution that has shaped and continues to shape this country’s economy: chattel slavery. Algorithms and other data technologies are the engines that facilitate the ongoing evolution of chattel slavery into the Prison Industrial Complex, justify the militarization of schoolyards and borders alike, and continue the expansion of contemporary practices of peonage.</p><p><strong>About the Speaker</strong></p><p><a href="https://twitter.com/yeshican?lang=en" target="_blank">Yeshimabeit Milner</a> is the founder & executive director of <a href="http://d4bl.org/about.html" target="_blank">Data for Black Lives</a>. Since she was 17, she has worked behind the scenes as a movement builder, technologist, and data scientist on a number of campaigns. She started Data for Black Lives because for too long she straddled the worlds of data and organizing and was determined to break down the silos to harness the power of data to make change in the lives of Black people. In two years, Data for Black Lives has raised over $2 million, hosted two sold-out conferences at the MIT Media Lab, and has changed the conversation around big data and technology across the U.S. and globally.</p><p>As the founder of Data for Black Lives, her work has received much acclaim. 
Yeshimabeit is an Echoing Green Black Male Achievement Fellow and an Ashoka Fellow; she joins the founders of Black Lives Matter and Occupy Wall Street in the distinguished inaugural class of Roddenberry Foundation Fellows and, most recently, was named one of the Forbes 30 Under 30.</p>
]]></content:encoded>
      <enclosure length="57917902" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/937827c7-4bc1-4781-a871-54a7bb734705/db129-timeline-mixdown-podcast_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Abolish Big Data</itunes:title>
      <itunes:author>Yeshimabeit Milner, Janet Haven</itunes:author>
      <itunes:duration>01:00:14</itunes:duration>
      <itunes:summary>This talk by Yeshimabeit Milner, founder &amp; executive director of Data for Black Lives, serves as a call to action to reject the concentration of Big Data in the hands of a few, to challenge the structures that allow data to be wielded as a weapon of immense political influence. To abolish Big Data would mean to put data in the hands of people who need it the most.</itunes:summary>
      <itunes:subtitle>This talk by Yeshimabeit Milner, founder &amp; executive director of Data for Black Lives, serves as a call to action to reject the concentration of Big Data in the hands of a few, to challenge the structures that allow data to be wielded as a weapon of immense political influence. To abolish Big Data would mean to put data in the hands of people who need it the most.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>64</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">2760f473-341a-4426-a1ff-c32a47066970</guid>
      <title>Uncanny Valley</title>
      <description><![CDATA[<p>Data & Society welcomes writer <a href="http://www.annawiener.com/">Anna Wiener</a> to discuss her debut book, <a href="https://us.macmillan.com/books/9780374719760"><i>Uncanny Valley</i></a>, and her experiences navigating the new digital economy.</p><p>Part coming-of-age story, part portrait of an already-bygone era, Anna Wiener’s memoir of working in Silicon Valley is a rare first-person glimpse into high-flying, reckless startup culture at a time of unchecked ambition, unregulated surveillance, wild fortune, and accelerating political power. Anna deftly charts the tech industry’s shift from self-appointed world savior to democracy-endangering liability, alongside a personal narrative of aspiration, ambivalence, and disillusionment.</p><p>Unsparing and incisive, <i>Uncanny Valley</i> is a cautionary tale, and a revelatory interrogation of a world reckoning with consequences its unwitting designers are only beginning to understand.</p><p>This event is moderated by Data & Society’s Director of Creative Strategy, <a href="https://datasociety.net/people/hinds-sam/">Sam Hinds</a>. It was recorded at Data & Society on January 22, 2020.</p>
]]></description>
      <pubDate>Mon, 03 Feb 2020 22:04:20 +0000</pubDate>
      <author>events@datasociety.net (Sam Hinds, Anna Wiener)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Data & Society welcomes writer <a href="http://www.annawiener.com/">Anna Wiener</a> to discuss her debut book, <a href="https://us.macmillan.com/books/9780374719760"><i>Uncanny Valley</i></a>, and her experiences navigating the new digital economy.</p><p>Part coming-of-age story, part portrait of an already-bygone era, Anna Wiener’s memoir of working in Silicon Valley is a rare first-person glimpse into high-flying, reckless startup culture at a time of unchecked ambition, unregulated surveillance, wild fortune, and accelerating political power. Anna deftly charts the tech industry’s shift from self-appointed world savior to democracy-endangering liability, alongside a personal narrative of aspiration, ambivalence, and disillusionment.</p><p>Unsparing and incisive, <i>Uncanny Valley</i> is a cautionary tale, and a revelatory interrogation of a world reckoning with consequences its unwitting designers are only beginning to understand.</p><p>This event is moderated by Data & Society’s Director of Creative Strategy, <a href="https://datasociety.net/people/hinds-sam/">Sam Hinds</a>. It was recorded at Data & Society on January 22, 2020.</p>
]]></content:encoded>
      <enclosure length="38977245" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/e70830b0-b33d-4cfe-9483-d375cf86121b/db128-uncannyvalley_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Uncanny Valley</itunes:title>
      <itunes:author>Sam Hinds, Anna Wiener</itunes:author>
      <itunes:duration>00:40:32</itunes:duration>
      <itunes:summary>Anna Wiener discusses her debut book, &quot;Uncanny Valley,&quot; which details her experience navigating the new digital economy.</itunes:summary>
      <itunes:subtitle>Anna Wiener discusses her debut book, &quot;Uncanny Valley,&quot; which details her experience navigating the new digital economy.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>63</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">6315a077-f352-4816-8b01-7b64ac038829</guid>
      <title>An Ecological Approach to Data Governance</title>
      <description><![CDATA[<p>Data are currency. Data provide the fuel for decision-making and profit-making. Data offer evidence for enhancing health services, infrastructure, and zoning, and for addressing environmental concerns. But the collection and use of data are spurring conflicts between cities, corporate and civil society organizations, and constituents. These conflicts occur on the grounds of data ownership, access, privacy, and security.</p><p>Dr. McNealy traces these conflicts to our perception of data as a singular piece of property. A better metaphor for data, she contends, would be that of a networked representation or observation in an ecosystem. Dr. McNealy argues that we require an ecological approach for understanding this era of emergent technology and data — both for creating adequate policy, and for protecting the vulnerable.</p><p>This event is moderated by Data & Society Director of Research Sareeta Amrute.</p><p>For more information about this talk and future events, visit datasociety.net.</p><p><i>This talk was recorded on January 8, 2020.</i></p>
]]></description>
      <pubDate>Mon, 27 Jan 2020 18:32:25 +0000</pubDate>
      <author>events@datasociety.net (Sareeta Amrute, Jasmine McNealy)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Data are currency. Data provide the fuel for decision-making and profit-making. Data offer evidence for enhancing health services, infrastructure, and zoning, and for addressing environmental concerns. But the collection and use of data are spurring conflicts between cities, corporate and civil society organizations, and constituents. These conflicts occur on the grounds of data ownership, access, privacy, and security.</p><p>Dr. McNealy traces these conflicts to our perception of data as a singular piece of property. A better metaphor for data, she contends, would be that of a networked representation or observation in an ecosystem. Dr. McNealy argues that we require an ecological approach for understanding this era of emergent technology and data — both for creating adequate policy, and for protecting the vulnerable.</p><p>This event is moderated by Data & Society Director of Research Sareeta Amrute.</p><p>For more information about this talk and future events, visit datasociety.net.</p><p><i>This talk was recorded on January 8, 2020.</i></p>
]]></content:encoded>
      <enclosure length="54645281" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/47ffa875-d145-4688-aa0d-b5e14d34924c/mcnealy-timeline-mixdown_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>An Ecological Approach to Data Governance</itunes:title>
      <itunes:author>Sareeta Amrute, Jasmine McNealy</itunes:author>
      <itunes:duration>00:56:50</itunes:duration>
      <itunes:summary>Jasmine McNealy presents her talk &quot;An Ecological Approach to Data Governance.&quot;</itunes:summary>
      <itunes:subtitle>Jasmine McNealy presents her talk &quot;An Ecological Approach to Data Governance.&quot;</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>62</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">08c0fdb7-6a26-4ef4-9b33-641b5335b44f</guid>
      <title>Climate Change and Conspiracy: Networked Disinformation</title>
      <description><![CDATA[<p>Our planet is warming, our seas are rising, and while the human cost of this will be massive, the human cause of it is undeniable. Or at least, it should be. Rising sea levels and the desertification of already dry areas could see millions across our world being displaced. The climate crisis is a massive threat to quality of life, but for some people, it’s also an opportunity. Across Europe, the 2015 migrant crisis destabilized civil society, leading to the rise of the AfD in Germany and the Lega party in Italy, and allowing Viktor Orban to whip up anti-migrant rhetoric in Hungary. These conditions are a petri dish for conspiracy theorists, politicians, and corporate interests, and a particular boon for the rhetoric of extreme anti-migration factions pushing online disinformation.</p><p>This event is moderated by Data & Society founder <a href="https://datasociety.net/people/boyd-danah/" target="_blank">danah boyd</a>.</p><p><i>Recorded on December 4, 2019.</i></p><p><strong>About the Speaker</strong></p><p>Dr. Joe Mulhall is Senior Researcher at <a href="https://www.hopenothate.org.uk/" target="_blank">HOPE not hate</a>, the UK’s largest anti-fascism and anti-racism organisation. He is a historian of postwar and contemporary fascism and completed his PhD at Royal Holloway, University of London. He sits on the Board of the UK Government-funded Holocaust Memorial Day Trust. He has published extensively, both academically and journalistically, appears regularly in the international news media, and gives talks around the world about his research. He has two forthcoming academic books with Routledge in 2020, including <i>The Alt-Right: Fascism for the 21st Century</i>.</p><p>For more information, visit <a href="https://www.datasociety.net">datasociety.net</a>.</p>
]]></description>
      <pubDate>Wed, 22 Jan 2020 16:56:13 +0000</pubDate>
      <author>events@datasociety.net (danah boyd, Joe Mulhall)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Our planet is warming, our seas are rising, and while the human cost of this will be massive, the human cause of it is undeniable. Or at least, it should be. Rising sea levels and the desertification of already dry areas could see millions across our world being displaced. The climate crisis is a massive threat to quality of life, but for some people, it’s also an opportunity. Across Europe, the 2015 migrant crisis destabilized civil society, leading to the rise of the AfD in Germany and the Lega party in Italy, and allowing Viktor Orban to whip up anti-migrant rhetoric in Hungary. These conditions are a petri dish for conspiracy theorists, politicians, and corporate interests, and a particular boon for the rhetoric of extreme anti-migration factions pushing online disinformation.</p><p>This event is moderated by Data & Society founder <a href="https://datasociety.net/people/boyd-danah/" target="_blank">danah boyd</a>.</p><p><i>Recorded on December 4, 2019.</i></p><p><strong>About the Speaker</strong></p><p>Dr. Joe Mulhall is Senior Researcher at <a href="https://www.hopenothate.org.uk/" target="_blank">HOPE not hate</a>, the UK’s largest anti-fascism and anti-racism organisation. He is a historian of postwar and contemporary fascism and completed his PhD at Royal Holloway, University of London. He sits on the Board of the UK Government-funded Holocaust Memorial Day Trust. He has published extensively, both academically and journalistically, appears regularly in the international news media, and gives talks around the world about his research. He has two forthcoming academic books with Routledge in 2020, including <i>The Alt-Right: Fascism for the 21st Century</i>.</p><p>For more information, visit <a href="https://www.datasociety.net">datasociety.net</a>.</p>
]]></content:encoded>
      <enclosure length="55005980" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/b9adbe7f-d32b-42cf-84a7-560f852da0c1/mulhallpodcast-timeline-mixdown_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Climate Change and Conspiracy: Networked Disinformation</itunes:title>
      <itunes:author>danah boyd, Joe Mulhall</itunes:author>
      <itunes:duration>00:57:13</itunes:duration>
      <itunes:summary>Joe Mulhall, senior researcher at European anti-extremism NGO HOPE not hate, explores how the international far-right is leveraging the current climate crisis, with a special focus on networked disinformation and exclusive new polling research conducted across six countries around the world.</itunes:summary>
      <itunes:subtitle>Joe Mulhall, senior researcher at European anti-extremism NGO HOPE not hate, explores how the international far-right is leveraging the current climate crisis, with a special focus on networked disinformation and exclusive new polling research conducted across six countries around the world.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>61</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-11-26t17:46:02+00:00-ea82ff52283f4c1</guid>
      <title>Black Software</title>
      <description><![CDATA[<p>Charlton McIlwain, author of &quot;Black Software: The Internet &amp; Racial Justice, from the AfroNet to Black Lives Matter,&quot; shares African Americans’ role in the internet’s creation and evolution, illuminating both the limits and possibilities for using digital technology to push for racial justice in the United States and across the globe. McIlwain's book shows that the story of racial justice movement organizing online is much longer and more varied than most people know. In fact, it spans nearly five decades and involves a varied group of engineers, entrepreneurs, hobbyists, journalists, and activists. But this is a history that is virtually unknown, even in our current age of Google, Facebook, Twitter, and Black Lives Matter. From the 1960s to the present, the book examines how computing technology has been used to neutralize the threat that black people pose to the existing racial order, but also how black people seized these new computing tools to build community and wealth, and wage a war for racial justice.</p>
<p>This event was hosted by Data &amp; Society Faculty Fellow Anita Say Chan.</p>
<p>Charlton McIlwain is Vice Provost of Faculty Engagement &amp; Development at New York University, and Professor of Media, Culture, and Communication at NYU’s Steinhardt School. Dr. McIlwain’s scholarly work focuses on the intersections of race, digital media, and racial justice activism. He is also the Founder of the Center for Critical Race &amp; Digital Studies, and in addition to &quot;Black Software: The Internet &amp; Racial Justice, From the AfroNet to Black Lives Matter&quot; (Oxford University Press), he is the co-author of the award-winning book, &quot;Race Appeal: How Political Candidates Invoke Race In U.S. Political Campaigns.&quot;</p>
]]></description>
      <pubDate>Wed, 04 Dec 2019 16:18:39 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Charlton McIlwain, author of &quot;Black Software: The Internet &amp; Racial Justice, from the AfroNet to Black Lives Matter,&quot; shares African Americans’ role in the internet’s creation and evolution, illuminating both the limits and possibilities for using digital technology to push for racial justice in the United States and across the globe. McIlwain's book shows that the story of racial justice movement organizing online is much longer and more varied than most people know. In fact, it spans nearly five decades and involves a varied group of engineers, entrepreneurs, hobbyists, journalists, and activists. But this is a history that is virtually unknown, even in our current age of Google, Facebook, Twitter, and Black Lives Matter. From the 1960s to the present, the book examines how computing technology has been used to neutralize the threat that black people pose to the existing racial order, but also how black people seized these new computing tools to build community and wealth, and wage a war for racial justice.</p>
<p>This event was hosted by Data &amp; Society Faculty Fellow Anita Say Chan.</p>
<p>Charlton McIlwain is Vice Provost of Faculty Engagement &amp; Development at New York University, and Professor of Media, Culture, and Communication at NYU’s Steinhardt School. Dr. McIlwain’s scholarly work focuses on the intersections of race, digital media, and racial justice activism. He is also the Founder of the Center for Critical Race &amp; Digital Studies, and in addition to &quot;Black Software: The Internet &amp; Racial Justice, From the AfroNet to Black Lives Matter&quot; (Oxford University Press), he is the co-author of the award-winning book, &quot;Race Appeal: How Political Candidates Invoke Race In U.S. Political Campaigns.&quot;</p>
]]></content:encoded>
      <enclosure length="46822339" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/a05a7c32-aa4e-425d-930c-3e7af8887425/ep0068_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Black Software</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:48:42</itunes:duration>
      <itunes:summary>Charlton McIlwain, author of &quot;Black Software: The Internet &amp; Racial Justice, from the AfroNet to Black Lives Matter,&quot; shares African Americans’ role in the internet’s creation and evolution, illuminating both the limits and possibilities for using digital technology to push for racial justice in the United States and across the globe. McIlwain&apos;s book shows that the story of racial justice movement organizing online is much longer and more varied than most people know. In fact, it spans nearly five decades and involves a varied group of engineers, entrepreneurs, hobbyists, journalists, and activists. But this is a history that is virtually unknown, even in our current age of Google, Facebook, Twitter, and Black Lives Matter. From the 1960s to the present, the book examines how computing technology has been used to neutralize the threat that black people pose to the existing racial order, but also how black people seized these new computing tools to build community and wealth, and wage a war for racial justice.

This event was hosted by Data &amp; Society Faculty Fellow Anita Say Chan.

Charlton McIlwain is Vice Provost of Faculty Engagement &amp; Development at New York University, and Professor of Media, Culture, and Communication at NYU’s Steinhardt School. Dr. McIlwain’s scholarly work focuses on the intersections of race, digital media, and racial justice activism. He is also the Founder of the Center for Critical Race &amp; Digital Studies, and in addition to &quot;Black Software: The Internet &amp; Racial Justice, From the AfroNet to Black Lives Matter&quot; (Oxford University Press), he is the co-author of the award-winning book, &quot;Race Appeal: How Political Candidates Invoke Race In U.S. Political Campaigns.&quot;</itunes:summary>
      <itunes:subtitle>Charlton McIlwain, author of &quot;Black Software: The Internet &amp; Racial Justice, from the AfroNet to Black Lives Matter,&quot; shares African Americans’ role in the internet’s creation and evolution.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>60</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-11-12t18:38:18+00:00-b9c348153edc3ef</guid>
      <title>Race After Technology</title>
      <description><![CDATA[<p>Ruha Benjamin discusses the relationship between machine bias and systemic racism, analyzing specific cases of “discriminatory design” and offering tools for a socially conscious approach to tech development. In &quot;Race After Technology: Abolitionist Tools for the New Jim Code,&quot; Benjamin cuts through tech-industry hype, from everyday apps to complex algorithms, to understand how emerging technologies can reinforce White supremacy and deepen social inequity. Presenting the concept of “the new Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite.<br />
This event is hosted by Data &amp; Society’s Director of Research Sareeta Amrute.</p>
]]></description>
      <pubDate>Tue, 26 Nov 2019 19:05:50 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Ruha Benjamin discusses the relationship between machine bias and systemic racism, analyzing specific cases of “discriminatory design” and offering tools for a socially conscious approach to tech development. In &quot;Race After Technology: Abolitionist Tools for the New Jim Code,&quot; Benjamin cuts through tech-industry hype, from everyday apps to complex algorithms, to understand how emerging technologies can reinforce White supremacy and deepen social inequity. Presenting the concept of “the new Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite.<br />
This event is hosted by Data &amp; Society’s Director of Research Sareeta Amrute.</p>
]]></content:encoded>
      <enclosure length="33749706" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/3f650174-e09a-4682-926a-7a99602e5df5/ep0067_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Race After Technology</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:35:01</itunes:duration>
      <itunes:summary>Ruha Benjamin discusses the relationship between machine bias and systemic racism, analyzing specific cases of “discriminatory design” and offering tools for a socially conscious approach to tech development. In &quot;Race After Technology: Abolitionist Tools for the New Jim Code,&quot; Benjamin cuts through tech-industry hype, from everyday apps to complex algorithms, to understand how emerging technologies can reinforce White supremacy and deepen social inequity. Presenting the concept of “the new Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite.
This event is hosted by Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:summary>
      <itunes:subtitle>Ruha Benjamin discusses the relationship between machine bias and systemic racism, analyzing specific cases of “discriminatory design” and offering tools for a socially conscious approach to tech development.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>59</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-07-09t20:36:43+00:00-46511009e28dbb4</guid>
      <title>Exposing Police Misconduct Data in the Era of Digital Privacy Concerns</title>
      <description><![CDATA[<p>This past year, 2018-2019 Data &amp; Society Fellow Cynthia Conti-Cook tackled an aspect of the criminal justice system lacking data: police misconduct. Her talk explores how this data gap came to be through police union claims to the Right to be Forgotten, and draws important lessons about how government actors exploit privacy rhetoric to cover up rights violations.</p>
<p>Cynthia Conti-Cook is a staff attorney in the Special Litigation Unit of New York City’s Legal Aid Society, where she oversees the Cop Accountability Project and Database and leads impact litigation and law reform projects on issues involving policing, data collection, risk assessment instruments, and the criminal justice system generally. She has presented as a panelist and trainer at many national, New York State, and New York City venues on topics of police misconduct, technology in the criminal justice system, and risk assessment instruments.</p>
]]></description>
      <pubDate>Tue, 23 Jul 2019 16:32:25 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>This past year, 2018-2019 Data &amp; Society Fellow Cynthia Conti-Cook tackled an aspect of the criminal justice system lacking data: police misconduct. Her talk explores how this data gap came to be through police union claims to the Right to be Forgotten, and draws important lessons about how government actors exploit privacy rhetoric to cover up rights violations.</p>
<p>Cynthia Conti-Cook is a staff attorney in the Special Litigation Unit of New York City’s Legal Aid Society, where she oversees the Cop Accountability Project and Database and leads impact litigation and law reform projects on issues involving policing, data collection, risk assessment instruments, and the criminal justice system generally. She has presented as a panelist and trainer at many national, New York State, and New York City venues on topics of police misconduct, technology in the criminal justice system, and risk assessment instruments.</p>
]]></content:encoded>
      <enclosure length="11283171" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/4c3cc69d-df87-49e5-b55e-4237dff3cd60/ep0066_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Exposing Police Misconduct Data in the Era of Digital Privacy Concerns</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:11:39</itunes:duration>
      <itunes:summary>This past year, 2018-2019 Data &amp; Society Fellow Cynthia Conti-Cook tackled an aspect of the criminal justice system lacking data: police misconduct. Her talk explores how this data gap came to be through police union claims to the Right to be Forgotten, and draws important lessons about how government actors exploit privacy rhetoric to cover up rights violations.

Cynthia Conti-Cook is a staff attorney in the Special Litigation Unit of New York City’s Legal Aid Society, where she oversees the Cop Accountability Project and Database and leads impact litigation and law reform projects on issues involving policing, data collection, risk assessment instruments, and the criminal justice system generally. She has presented as a panelist and trainer at many national, New York State, and New York City venues on topics of police misconduct, technology in the criminal justice system, and risk assessment instruments.</itunes:summary>
      <itunes:subtitle>2018-2019 Data &amp; Society Fellow Cynthia Conti-Cook explores how police union claims to the Right to be Forgotten created a data gap around police misconduct, and what this reveals about how government actors exploit privacy rhetoric to cover up rights violations.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>58</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-06-18t14:26:42+00:00-b4d7b29c6662390</guid>
      <title>Why Now is the Time for Racial Literacy in Tech</title>
      <description><![CDATA[<p>2018-19 Data &amp; Society Fellow Jessie Daniels offers strategies for racial literacy in tech grounded in intellectual understanding, emotional intelligence, and a commitment to take action. In this podcast, Daniels describes how the biggest barrier to racial literacy in tech is &quot;thinking that race doesn't matter in tech.&quot; She argues that &quot;without racial literacy in tech, without a specific and conscious effort to address race, we will certainly be recreating a high-tech Jim Crow: a segregated, divided, unequal future, sped-up, spread out, and automated through algorithms, AI, and machine learning.&quot;</p>
<p>Jessie Daniels, PhD, is a Professor at Hunter College (Sociology) and at The Graduate Center, CUNY (Africana Studies, Critical Social Psychology, and Sociology). She earned her PhD from the University of Texas at Austin and held a Charles Phelps Taft postdoctoral fellowship at the University of Cincinnati. Her main area of interest is race and digital media technologies; she is an internationally recognized expert on Internet manifestations of racism. Daniels is the author or editor of five books, has bylines at The New York Times, DAME, The Establishment, and Entropy, and writes a regular column at Huffington Post.</p>
<p>Her recent paper, &quot;Advancing Racial Literacy in Tech,&quot; co-authored with 2018-19 Fellow Mutale Nkonde and 2017-18 Fellow Darakhshan Mir, can be found at http://www.racialliteracy.tech.</p>
]]></description>
      <pubDate>Tue, 09 Jul 2019 20:32:02 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>2018-19 Data &amp; Society Fellow Jessie Daniels offers strategies for racial literacy in tech grounded in intellectual understanding, emotional intelligence, and a commitment to take action. In this podcast, Daniels describes how the biggest barrier to racial literacy in tech is &quot;thinking that race doesn't matter in tech.&quot; She argues that &quot;without racial literacy in tech, without a specific and conscious effort to address race, we will certainly be recreating a high-tech Jim Crow: a segregated, divided, unequal future, sped-up, spread out, and automated through algorithms, AI, and machine learning.&quot;</p>
<p>Jessie Daniels, PhD, is a Professor at Hunter College (Sociology) and at The Graduate Center, CUNY (Africana Studies, Critical Social Psychology, and Sociology). She earned her PhD from the University of Texas at Austin and held a Charles Phelps Taft postdoctoral fellowship at the University of Cincinnati. Her main area of interest is race and digital media technologies; she is an internationally recognized expert on Internet manifestations of racism. Daniels is the author or editor of five books, has bylines at The New York Times, DAME, The Establishment, and Entropy, and writes a regular column at Huffington Post.</p>
<p>Her recent paper, &quot;Advancing Racial Literacy in Tech,&quot; co-authored with 2018-19 Fellow Mutale Nkonde and 2017-18 Fellow Darakhshan Mir, can be found at http://www.racialliteracy.tech.</p>
]]></content:encoded>
      <enclosure length="4447280" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/4af7e2ca-1a12-463f-b0b1-051019f71a1c/ep0064_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Why Now is the Time for Racial Literacy in Tech</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:12:21</itunes:duration>
      <itunes:summary>2018-19 Data &amp; Society Fellow Jessie Daniels offers strategies for racial literacy in tech grounded in intellectual understanding, emotional intelligence, and a commitment to take action. In this podcast, Daniels describes how the biggest barrier to racial literacy in tech is &quot;thinking that race doesn&apos;t matter in tech.&quot; She argues that &quot;without racial literacy in tech, without a specific and conscious effort to address race, we will certainly be recreating a high-tech Jim Crow: a segregated, divided, unequal future, sped-up, spread out, and automated through algorithms, AI, and machine learning.&quot; 

Jessie Daniels, PhD, is a Professor at Hunter College (Sociology) and at The Graduate Center, CUNY (Africana Studies, Critical Social Psychology, and Sociology). She earned her PhD from the University of Texas at Austin and held a Charles Phelps Taft postdoctoral fellowship at the University of Cincinnati. Her main area of interest is race and digital media technologies; she is an internationally recognized expert on Internet manifestations of racism. Daniels is the author or editor of five books, has bylines at The New York Times, DAME, The Establishment, and Entropy, and writes a regular column at Huffington Post.

Her recent paper, &quot;Advancing Racial Literacy in Tech,&quot; co-authored with 2018-19 Fellow Mutale Nkonde and 2017-18 Fellow Darakhshan Mir, can be found at http://www.racialliteracy.tech.</itunes:summary>
      <itunes:subtitle>2018-19 Data &amp; Society Fellow Jessie Daniels offers strategies for racial literacy in tech grounded in intellectual understanding, emotional intelligence, and a commitment to take action. In this podcast, Daniels describes how the biggest barrier to racial literacy in tech is &quot;thinking that race doesn&apos;t matter in tech.&quot; She argues that &quot;without racial literacy in tech, without a specific and conscious effort to address race, we will certainly be recreating a high-tech Jim Crow: a segregated, divided, unequal future, sped-up, spread out, and automated through algorithms, AI, and machine learning.&quot; 

Jessie Daniels, PhD, is a Professor at Hunter College (Sociology) and at The Graduate Center, CUNY (Africana Studies, Critical Social Psychology, and Sociology). She earned her PhD from the University of Texas at Austin and held a Charles Phelps Taft postdoctoral fellowship at the University of Cincinnati. Her main area of interest is race and digital media technologies; she is an internationally recognized expert on Internet manifestations of racism. Daniels is the author or editor of five books, has bylines at The New York Times, DAME, The Establishment, and Entropy, and writes a regular column at Huffington Post.

Her recent paper, &quot;Advancing Racial Literacy in Tech,&quot; co-authored with 2018-19 Fellow Mutale Nkonde and 2017-18 Fellow Darakhshan Mir, can be found at http://www.racialliteracy.tech.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>57</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-06-25t17:41:44+00:00-1f61ec70409a2b4</guid>
      <title>Cryptoparty as Rent Party</title>
      <description><![CDATA[<p>2018-19 Data &amp; Society Fellow Jasmine E. McNealy compares Cryptoparties to the goals and aspirations of the famous rent parties of the Harlem Renaissance. Both represent communities filling in the gaps in infrastructure to support each other. While the rent party helped pay rent through nights of celebration, jazz, and revelry, McNealy's research shows that the Cryptoparty strives for a similar freedom through educating community members on how to safely navigate harmful surveillance technologies.</p>
<p>Jasmine E. McNealy is an assistant professor of telecommunication at the University of Florida College of Journalism and Communications. She studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, communities, and culture.</p>
]]></description>
      <pubDate>Wed, 3 Jul 2019 13:36:03 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>2018-19 Data &amp; Society Fellow Jasmine E. McNealy compares Cryptoparties to the goals and aspirations of the famous rent parties of the Harlem Renaissance. Both represent communities filling in the gaps in infrastructure to support each other. While the rent party helped pay rent through nights of celebration, jazz, and revelry, McNealy's research shows that the Cryptoparty strives for a similar freedom through educating community members on how to safely navigate harmful surveillance technologies.</p>
<p>Jasmine E. McNealy is an assistant professor of telecommunication at the University of Florida College of Journalism and Communications. She studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, communities, and culture.</p>
]]></content:encoded>
      <enclosure length="14096036" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/7ce38c6c-73e4-4b87-a94f-05cf000c3c69/ep0065_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Cryptoparty as Rent Party</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:14:34</itunes:duration>
      <itunes:summary>2018-19 Data &amp; Society Fellow Jasmine E. McNealy compares Cryptoparties to the goals and aspirations of the famous rent parties of the Harlem Renaissance. Both represent communities filling in the gaps in infrastructure to support each other. While the rent party helped pay rent through nights of celebration, jazz, and revelry, McNealy&apos;s research shows that the Cryptoparty strives for a similar freedom through educating community members on how to safely navigate harmful surveillance technologies.

Jasmine E. McNealy is an assistant professor of telecommunication at the University of Florida College of Journalism and Communications. She studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, communities, and culture.</itunes:summary>
      <itunes:subtitle>2018-19 Data &amp; Society Fellow Jasmine E. McNealy compares Cryptoparties to the goals and aspirations of the famous rent parties of the Harlem Renaissance. Both represent communities filling in the gaps in infrastructure to support each other. While the rent party helped pay rent through nights of celebration, jazz, and revelry, McNealy&apos;s research shows that the Cryptoparty strives for a similar freedom through educating community members on how to safely navigate harmful surveillance technologies.

Jasmine E. McNealy is an assistant professor of telecommunication at the University of Florida College of Journalism and Communications. She studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, communities, and culture.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>56</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-06-10t20:26:00+00:00-c232d43abfb1717</guid>
      <title>Dark Patterns in Accessibility Tech</title>
      <description><![CDATA[<p>Chancey Fleet, a Brooklyn-based accessibility advocate, coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Chancey was recognized as a 2017 &quot;Library Journal&quot; Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship at Data &amp; Society, she worked to advance public understanding of and explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy. She proudly serves as the Vice President of the National Federation of the Blind of New York.</p>
]]></description>
      <pubDate>Tue, 11 Jun 2019 16:59:47 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Chancey Fleet, a Brooklyn-based accessibility advocate, coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Chancey was recognized as a 2017 &quot;Library Journal&quot; Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship at Data &amp; Society, she worked to advance public understanding of and explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy. She proudly serves as the Vice President of the National Federation of the Blind of New York.</p>
]]></content:encoded>
      <enclosure length="10591671" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/0534292b-8900-48a9-890a-6dc80f8ca3b9/ep0063_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Dark Patterns in Accessibility Tech</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:10:57</itunes:duration>
      <itunes:summary>Chancey Fleet, a Brooklyn-based accessibility advocate, coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Chancey was recognized as a 2017 &quot;Library Journal&quot; Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship at Data &amp; Society, she worked to advance public understanding of and explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy. She proudly serves as the Vice President of the National Federation of the Blind of New York.</itunes:summary>
      <itunes:subtitle>Chancey Fleet, a Brooklyn-based accessibility advocate, coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Chancey was recognized as a 2017 &quot;Library Journal&quot; Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship at Data &amp; Society, she worked to advance public understanding of and explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy. She proudly serves as the Vice President of the National Federation of the Blind of New York.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>55</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-05-13t19:03:30+00:00-3d591e123ab45d1</guid>
      <title>Ghost Work</title>
      <description><![CDATA[<p>Anthropologist Mary L. Gray shares her latest book, &quot;Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass,&quot; a collaboration with computer scientist Siddharth Suri. &quot;Ghost Work&quot; is a necessary and revelatory exposé of the invisible human workforce that powers the web—and that foreshadows the true future of work.</p>
<p>Hidden beneath the surface of the web, lost in our wrong-headed debates about AI, a new menace is looming. This book unveils how services delivered by companies like Amazon, Google, Microsoft, and Uber can only function smoothly thanks to the judgment and experience of a vast, invisible human labor force. These people doing “ghost work” make the internet seem smart. They perform high-tech piecework: flagging X-rated content, proofreading, designing engine parts, and much more. An estimated 8 percent of Americans have worked at least once in this “ghost economy,” and that number is growing. They usually earn less than legal minimums for traditional work, they have no health benefits, and they can be fired at any time for any reason, or none.</p>
<p>There are no labor laws to govern this kind of work, and these latter-day assembly lines draw in—and all too often overwork and underpay—a surprisingly diverse range of workers: harried young mothers, professionals forced into early retirement, recent grads who can’t get a toehold on the traditional employment ladder, and minorities shut out of the jobs they want. Gray and Suri also show how ghost workers, employers, and society at large can ensure that this new kind of work creates opportunity—rather than misery—for those who do it.</p>
<p>Amara.org co-founder Dean Jansen joins Mary in a conversation moderated by Data &amp; Society’s Director of Research Sareeta Amrute.</p>
]]></description>
      <pubDate>Tue, 14 May 2019 14:42:44 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Anthropologist Mary L. Gray shares her latest book, &quot;Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass,&quot; a collaboration with computer scientist Siddharth Suri. &quot;Ghost Work&quot; is a necessary and revelatory exposé of the invisible human workforce that powers the web—and that foreshadows the true future of work.</p>
<p>Hidden beneath the surface of the web, lost in our wrong-headed debates about AI, a new menace is looming. This book unveils how services delivered by companies like Amazon, Google, Microsoft, and Uber can only function smoothly thanks to the judgment and experience of a vast, invisible human labor force. These people doing “ghost work” make the internet seem smart. They perform high-tech piecework: flagging X-rated content, proofreading, designing engine parts, and much more. An estimated 8 percent of Americans have worked at least once in this “ghost economy,” and that number is growing. They usually earn less than legal minimums for traditional work, they have no health benefits, and they can be fired at any time for any reason, or none.</p>
<p>There are no labor laws to govern this kind of work, and these latter-day assembly lines draw in—and all too often overwork and underpay—a surprisingly diverse range of workers: harried young mothers, professionals forced into early retirement, recent grads who can’t get a toehold on the traditional employment ladder, and minorities shut out of the jobs they want. Gray and Suri also show how ghost workers, employers, and society at large can ensure that this new kind of work creates opportunity—rather than misery—for those who do it.</p>
<p>Amara.org co-founder Dean Jansen joins Mary in a conversation moderated by Data &amp; Society’s Director of Research Sareeta Amrute.</p>
]]></content:encoded>
      <enclosure length="56194701" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/b91a4fb2-42be-4e6a-8780-ac2d7098a6f0/ep0061_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Ghost Work</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:58:27</itunes:duration>
      <itunes:summary>Anthropologist Mary L. Gray shares her latest book, &quot;Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass,&quot; a collaboration with computer scientist Siddharth Suri. &quot;Ghost Work&quot; is a necessary and revelatory exposé of the invisible human workforce that powers the web—and that foreshadows the true future of work.

Hidden beneath the surface of the web, lost in our wrong-headed debates about AI, a new menace is looming. This book unveils how services delivered by companies like Amazon, Google, Microsoft, and Uber can only function smoothly thanks to the judgment and experience of a vast, invisible human labor force. These people doing “ghost work” make the internet seem smart. They perform high-tech piecework: flagging X-rated content, proofreading, designing engine parts, and much more. An estimated 8 percent of Americans have worked at least once in this “ghost economy,” and that number is growing. They usually earn less than legal minimums for traditional work, they have no health benefits, and they can be fired at any time for any reason, or none.

There are no labor laws to govern this kind of work, and these latter-day assembly lines draw in—and all too often overwork and underpay—a surprisingly diverse range of workers: harried young mothers, professionals forced into early retirement, recent grads who can’t get a toehold on the traditional employment ladder, and minorities shut out of the jobs they want. Gray and Suri also show how ghost workers, employers, and society at large can ensure that this new kind of work creates opportunity—rather than misery—for those who do it.

Amara.org co-founder Dean Jansen joins Mary in a conversation moderated by Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:summary>
      <itunes:subtitle>Anthropologist Mary L. Gray shares her latest book, &quot;Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass,&quot; a collaboration with computer scientist Siddharth Suri. &quot;Ghost Work&quot; is a necessary and revelatory exposé of the invisible human workforce that powers the web—and that foreshadows the true future of work.

Hidden beneath the surface of the web, lost in our wrong-headed debates about AI, a new menace is looming. This book unveils how services delivered by companies like Amazon, Google, Microsoft, and Uber can only function smoothly thanks to the judgment and experience of a vast, invisible human labor force. These people doing “ghost work” make the internet seem smart. They perform high-tech piecework: flagging X-rated content, proofreading, designing engine parts, and much more. An estimated 8 percent of Americans have worked at least once in this “ghost economy,” and that number is growing. They usually earn less than legal minimums for traditional work, they have no health benefits, and they can be fired at any time for any reason, or none.

There are no labor laws to govern this kind of work, and these latter-day assembly lines draw in—and all too often overwork and underpay—a surprisingly diverse range of workers: harried young mothers, professionals forced into early retirement, recent grads who can’t get a toehold on the traditional employment ladder, and minorities shut out of the jobs they want. Gray and Suri also show how ghost workers, employers, and society at large can ensure that this new kind of work creates opportunity—rather than misery—for those who do it.

Amara.org co-founder Dean Jansen joins Mary in a conversation moderated by Data &amp; Society’s Director of Research Sareeta Amrute.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>54</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-02-20t18:30:08+00:00-8a64cc8f40b76b1</guid>
      <title>Surveillance Capitalism and Democracy</title>
<description><![CDATA[<p>Shoshana Zuboff: Surveillance capitalism arrived on the scene with democracy already on the ropes, its early life sheltered and nourished by neoliberalism’s claims to freedom that set it at a distance from the lives of people. Surveillance capitalists quickly learned to exploit the gathering momentum aimed at hollowing out democracy’s meaning and muscle. Despite the democratic promise of its rhetoric and capabilities, it contributed to a new Gilded Age of extreme wealth inequality, as well as to once-unimaginable new forms of economic exclusivity and new sources of social inequality that separate “the tuners” from “the tuned.”</p>
<p>Among the many insults to democracy and democratic institutions imposed by this coup des gens, Zuboff counts the unauthorized expropriation of private human experience; the hijack of the division of learning in society; the structural independence from people; the top-down imposition of the hive collective; the rise of instrumentarian power and radical indifference that together sustain its extractive logic; the construction, ownership, and operation of the means of behavior modification that is Big Other; the abrogation of the natural right to the future tense and the natural right to sanctuary; the degradation of the self-determining individual as the crucible of democratic life; and the insistence on psychic numbing as the answer to its illegitimate quid pro quo.</p>
<p>This event is hosted by Data &amp; Society’s AI on the Ground Research Lead Madeleine Clare Elish.</p>
]]></description>
      <pubDate>Wed, 20 Feb 2019 19:56:27 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
<content:encoded><![CDATA[<p>Shoshana Zuboff: Surveillance capitalism arrived on the scene with democracy already on the ropes, its early life sheltered and nourished by neoliberalism’s claims to freedom that set it at a distance from the lives of people. Surveillance capitalists quickly learned to exploit the gathering momentum aimed at hollowing out democracy’s meaning and muscle. Despite the democratic promise of its rhetoric and capabilities, it contributed to a new Gilded Age of extreme wealth inequality, as well as to once-unimaginable new forms of economic exclusivity and new sources of social inequality that separate “the tuners” from “the tuned.”</p>
<p>Among the many insults to democracy and democratic institutions imposed by this coup des gens, Zuboff counts the unauthorized expropriation of private human experience; the hijack of the division of learning in society; the structural independence from people; the top-down imposition of the hive collective; the rise of instrumentarian power and radical indifference that together sustain its extractive logic; the construction, ownership, and operation of the means of behavior modification that is Big Other; the abrogation of the natural right to the future tense and the natural right to sanctuary; the degradation of the self-determining individual as the crucible of democratic life; and the insistence on psychic numbing as the answer to its illegitimate quid pro quo.</p>
<p>This event is hosted by Data &amp; Society’s AI on the Ground Research Lead Madeleine Clare Elish.</p>
]]></content:encoded>
      <enclosure length="46523563" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/28a45f54-a496-41c7-9711-7a446e0dbfaa/ep0060_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Surveillance Capitalism and Democracy</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:48:19</itunes:duration>
<itunes:summary>Shoshana Zuboff: Surveillance capitalism arrived on the scene with democracy already on the ropes, its early life sheltered and nourished by neoliberalism’s claims to freedom that set it at a distance from the lives of people. Surveillance capitalists quickly learned to exploit the gathering momentum aimed at hollowing out democracy’s meaning and muscle. Despite the democratic promise of its rhetoric and capabilities, it contributed to a new Gilded Age of extreme wealth inequality, as well as to once-unimaginable new forms of economic exclusivity and new sources of social inequality that separate “the tuners” from “the tuned.”

Among the many insults to democracy and democratic institutions imposed by this coup des gens, Zuboff counts the unauthorized expropriation of private human experience; the hijack of the division of learning in society; the structural independence from people; the top-down imposition of the hive collective; the rise of instrumentarian power and radical indifference that together sustain its extractive logic; the construction, ownership, and operation of the means of behavior modification that is Big Other; the abrogation of the natural right to the future tense and the natural right to sanctuary; the degradation of the self-determining individual as the crucible of democratic life; and the insistence on psychic numbing as the answer to its illegitimate quid pro quo.

This event is hosted by Data &amp; Society’s AI on the Ground Research Lead Madeleine Clare Elish.</itunes:summary>
<itunes:subtitle>Shoshana Zuboff: Surveillance capitalism arrived on the scene with democracy already on the ropes, its early life sheltered and nourished by neoliberalism’s claims to freedom that set it at a distance from the lives of people. Surveillance capitalists quickly learned to exploit the gathering momentum aimed at hollowing out democracy’s meaning and muscle. Despite the democratic promise of its rhetoric and capabilities, it contributed to a new Gilded Age of extreme wealth inequality, as well as to once-unimaginable new forms of economic exclusivity and new sources of social inequality that separate “the tuners” from “the tuned.”

Among the many insults to democracy and democratic institutions imposed by this coup des gens, Zuboff counts the unauthorized expropriation of private human experience; the hijack of the division of learning in society; the structural independence from people; the top-down imposition of the hive collective; the rise of instrumentarian power and radical indifference that together sustain its extractive logic; the construction, ownership, and operation of the means of behavior modification that is Big Other; the abrogation of the natural right to the future tense and the natural right to sanctuary; the degradation of the self-determining individual as the crucible of democratic life; and the insistence on psychic numbing as the answer to its illegitimate quid pro quo.

This event is hosted by Data &amp; Society’s AI on the Ground Research Lead Madeleine Clare Elish.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>53</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2019-01-15t15:18:54+00:00-91a3b7340f96ff2</guid>
      <title>Memes to Movements</title>
      <description><![CDATA[<p>An Xiao Mina presents a global exploration of internet memes as agents of pop culture, politics, protest, and propaganda on- and offline. Based on her new book, Memes to Movements: How the World’s Most Viral Media is Changing Social Protest and Power (Beacon Press, January 2019), Mina uses social media-driven movements to unpack the mechanics of memes and how they operate to reinforce, amplify, and shape today’s politics.</p>
<p>Crucially, Mina reveals how, in parts of the world where public dissent is downright dangerous, memes can belie contentious political opinions that would incur drastic consequences if expressed outright. She finds that the “silly” stuff of meme culture—the photo remixes, the selfies, the YouTube songs, and the pun-tastic hashtags—are fundamentally intertwined with how we find and affirm one another, direct attention to human rights and social justice issues, build narratives, and make culture.</p>
<p>Joining her in conversation is Data &amp; Society Founder and President danah boyd.</p>
]]></description>
      <pubDate>Wed, 16 Jan 2019 13:29:27 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>An Xiao Mina presents a global exploration of internet memes as agents of pop culture, politics, protest, and propaganda on- and offline. Based on her new book, Memes to Movements: How the World’s Most Viral Media is Changing Social Protest and Power (Beacon Press, January 2019), Mina uses social media-driven movements to unpack the mechanics of memes and how they operate to reinforce, amplify, and shape today’s politics.</p>
<p>Crucially, Mina reveals how, in parts of the world where public dissent is downright dangerous, memes can belie contentious political opinions that would incur drastic consequences if expressed outright. She finds that the “silly” stuff of meme culture—the photo remixes, the selfies, the YouTube songs, and the pun-tastic hashtags—are fundamentally intertwined with how we find and affirm one another, direct attention to human rights and social justice issues, build narratives, and make culture.</p>
<p>Joining her in conversation is Data &amp; Society Founder and President danah boyd.</p>
]]></content:encoded>
      <enclosure length="37262777" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/ec5e2aa3-8232-4dcb-90f2-a58ce4061d22/ep0059_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Memes to Movements</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:38:45</itunes:duration>
      <itunes:summary>An Xiao Mina presents a global exploration of internet memes as agents of pop culture, politics, protest, and propaganda on- and offline. Based on her new book, Memes to Movements: How the World’s Most Viral Media is Changing Social Protest and Power (Beacon Press, January 2019), Mina uses social media-driven movements to unpack the mechanics of memes and how they operate to reinforce, amplify, and shape today’s politics.

Crucially, Mina reveals how, in parts of the world where public dissent is downright dangerous, memes can belie contentious political opinions that would incur drastic consequences if expressed outright. She finds that the “silly” stuff of meme culture—the photo remixes, the selfies, the YouTube songs, and the pun-tastic hashtags—are fundamentally intertwined with how we find and affirm one another, direct attention to human rights and social justice issues, build narratives, and make culture.

Joining her in conversation is Data &amp; Society Founder and President danah boyd.</itunes:summary>
      <itunes:subtitle>An Xiao Mina presents a global exploration of internet memes as agents of pop culture, politics, protest, and propaganda on- and offline. Based on her new book, Memes to Movements: How the World’s Most Viral Media is Changing Social Protest and Power (Beacon Press, January 2019), Mina uses social media-driven movements to unpack the mechanics of memes and how they operate to reinforce, amplify, and shape today’s politics.

Crucially, Mina reveals how, in parts of the world where public dissent is downright dangerous, memes can belie contentious political opinions that would incur drastic consequences if expressed outright. She finds that the “silly” stuff of meme culture—the photo remixes, the selfies, the YouTube songs, and the pun-tastic hashtags—are fundamentally intertwined with how we find and affirm one another, direct attention to human rights and social justice issues, build narratives, and make culture.

Joining her in conversation is Data &amp; Society Founder and President danah boyd.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>52</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-12-10t15:23:44+00:00-763910ee5bcc966</guid>
      <title>Temp: How American Work, American Business, and the American Dream Became Temporary</title>
      <description><![CDATA[<p>Historian Louis Hyman on the surprising origins of the “gig economy.” Hyman is joined in conversation by Data &amp; Society’s Labor Engagement Lead Aiha Nguyen and Researcher Alex Rosenblat.</p>
<p>Hyman’s latest book &quot;Temp: How American Work, American Business, and the American Dream Became Temporary&quot; tracks the transformation of an ethos that favored long-term investment in work (and workers) to one promoting short-term returns. A series of deliberate decisions preceded the digital revolution, setting off the collapse of the postwar institutions that insulated us from volatility, including big unions, big corporations, and powerful regulators.</p>
<p>Through the experiences of those on the inside–consultants and executives, temps and office workers, line workers and migrant laborers–Temp shows how the American Dream was unmade.</p>
]]></description>
      <pubDate>Tue, 11 Dec 2018 18:27:53 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Historian Louis Hyman on the surprising origins of the “gig economy.” Hyman is joined in conversation by Data &amp; Society’s Labor Engagement Lead Aiha Nguyen and Researcher Alex Rosenblat.</p>
<p>Hyman’s latest book &quot;Temp: How American Work, American Business, and the American Dream Became Temporary&quot; tracks the transformation of an ethos that favored long-term investment in work (and workers) to one promoting short-term returns. A series of deliberate decisions preceded the digital revolution, setting off the collapse of the postwar institutions that insulated us from volatility, including big unions, big corporations, and powerful regulators.</p>
<p>Through the experiences of those on the inside–consultants and executives, temps and office workers, line workers and migrant laborers–Temp shows how the American Dream was unmade.</p>
]]></content:encoded>
      <enclosure length="46381304" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/9ef8ef83-a4de-41c5-95f0-025d6ca200a5/ep0058_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Temp: How American Work, American Business, and the American Dream Became Temporary</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:48:19</itunes:duration>
      <itunes:summary>Historian Louis Hyman on the surprising origins of the “gig economy.” Hyman is joined in conversation by Data &amp; Society’s Labor Engagement Lead Aiha Nguyen and Researcher Alex Rosenblat.

Hyman’s latest book &quot;Temp: How American Work, American Business, and the American Dream Became Temporary&quot; tracks the transformation of an ethos that favored long-term investment in work (and workers) to one promoting short-term returns. A series of deliberate decisions preceded the digital revolution, setting off the collapse of the postwar institutions that insulated us from volatility, including big unions, big corporations, and powerful regulators.

Through the experiences of those on the inside–consultants and executives, temps and office workers, line workers and migrant laborers–Temp shows how the American Dream was unmade.</itunes:summary>
      <itunes:subtitle>Historian Louis Hyman on the surprising origins of the “gig economy.” Hyman is joined in conversation by Data &amp; Society’s Labor Engagement Lead Aiha Nguyen and Researcher Alex Rosenblat.

Hyman’s latest book &quot;Temp: How American Work, American Business, and the American Dream Became Temporary&quot; tracks the transformation of an ethos that favored long-term investment in work (and workers) to one promoting short-term returns. A series of deliberate decisions preceded the digital revolution, setting off the collapse of the postwar institutions that insulated us from volatility, including big unions, big corporations, and powerful regulators.

Through the experiences of those on the inside–consultants and executives, temps and office workers, line workers and migrant laborers–Temp shows how the American Dream was unmade.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>51</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-10-09t21:37:31+00:00-1e6dff2bdc0d8fc</guid>
      <title>Redefining Benefits for Future Workers</title>
      <description><![CDATA[<p>Data &amp; Society welcomes The Workers Lab Co-Founder and CEO Carmen Rojas; Entrepreneur and Author Rachel Schneider; and Professor, Researcher, and Activist Tamara K. Nopper to discuss the intersection of fintech and credit and benefit systems for low-wage workers with Data &amp; Society Labor Engagement Lead Aiha Nguyen.</p>
<p>Rojas and Schneider both focus on the financial challenges facing precarious low-wage workers–including “gig” workers–and how these workers might need different benefits than have traditionally been provided, like retirement. Nopper offers insight into the world of credit scoring and data, analyzing how fintech “innovation” intersects with race, class, and gender wealth gaps. Nguyen is an organizer who works to bridge research and practice, expanding understanding of technological systems’ impact on work. Together, they discuss questions such as:</p>
<p>How will current and projected income volatility in the gig economy change available workplace benefits?<br />
What role could fintech play in the future of work? Can workers be a part of shaping that future?<br />
What data will low-income working families need to share in order to have access to capital–and will it be worth it?</p>
<p>Dr. Carmen Rojas is the Co-Founder and CEO of The Workers Lab, an organization that invests in experiments and innovation to build power for working people in the 21st century. For more than 20 years, Carmen has worked with foundations, financial institutions, and non-profits to improve the lives of working people across the United States. Carmen currently sits on the boards of the Marguerite Casey Foundation, Neighborhood Funders Group, General Service Foundation, JOLT, Certification Associates, and on the Advisory Boards of Fund Good Jobs and Floodgate Academy. Carmen holds a Ph.D. in City and Regional Planning from the University of California, Berkeley and was a Fulbright Scholar in 2007.</p>
<p>Rachel Schneider is the Omidyar Network Entrepreneur-in-Residence at the Aspen Institute Financial Security Program and co-author of The Financial Diaries: How American Families Cope in a World of Uncertainty. Rachel’s research has been featured in the nation’s top publications, including the New York Times, Wall Street Journal and many others. Though she began her career as an investment banker at Merrill Lynch &amp; Co., Rachel credits her commitment to the potential for innovative finance to solve major social problems from her days as a VISTA Volunteer (now AmeriCorps). She holds a J.D./M.B.A. from the University of Chicago, and a B.A. from UC Berkeley.</p>
<p>Tamara K. Nopper holds a PhD in Sociology; her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</p>
]]></description>
      <pubDate>Wed, 10 Oct 2018 15:36:24 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Data &amp; Society welcomes The Workers Lab Co-Founder and CEO Carmen Rojas; Entrepreneur and Author Rachel Schneider; and Professor, Researcher, and Activist Tamara K. Nopper to discuss the intersection of fintech and credit and benefit systems for low-wage workers with Data &amp; Society Labor Engagement Lead Aiha Nguyen.</p>
<p>Rojas and Schneider both focus on the financial challenges facing precarious low-wage workers–including “gig” workers–and how these workers might need different benefits than have traditionally been provided, like retirement. Nopper offers insight into the world of credit scoring and data, analyzing how fintech “innovation” intersects with race, class, and gender wealth gaps. Nguyen is an organizer who works to bridge research and practice, expanding understanding of technological systems’ impact on work. Together, they discuss questions such as:</p>
<p>How will current and projected income volatility in the gig economy change available workplace benefits?<br />
What role could fintech play in the future of work? Can workers be a part of shaping that future?<br />
What data will low-income working families need to share in order to have access to capital–and will it be worth it?</p>
<p>Dr. Carmen Rojas is the Co-Founder and CEO of The Workers Lab, an organization that invests in experiments and innovation to build power for working people in the 21st century. For more than 20 years, Carmen has worked with foundations, financial institutions, and non-profits to improve the lives of working people across the United States. Carmen currently sits on the boards of the Marguerite Casey Foundation, Neighborhood Funders Group, General Service Foundation, JOLT, Certification Associates, and on the Advisory Boards of Fund Good Jobs and Floodgate Academy. Carmen holds a Ph.D. in City and Regional Planning from the University of California, Berkeley and was a Fulbright Scholar in 2007.</p>
<p>Rachel Schneider is the Omidyar Network Entrepreneur-in-Residence at the Aspen Institute Financial Security Program and co-author of The Financial Diaries: How American Families Cope in a World of Uncertainty. Rachel’s research has been featured in the nation’s top publications, including the New York Times, Wall Street Journal and many others. Though she began her career as an investment banker at Merrill Lynch &amp; Co., Rachel credits her commitment to the potential for innovative finance to solve major social problems from her days as a VISTA Volunteer (now AmeriCorps). She holds a J.D./M.B.A. from the University of Chicago, and a B.A. from UC Berkeley.</p>
<p>Tamara K. Nopper holds a PhD in Sociology; her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</p>
]]></content:encoded>
      <enclosure length="43456096" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1d6dedc5-103b-4932-86a0-17be2d20db8d/ep0057_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Redefining Benefits for Future Workers</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:45:16</itunes:duration>
      <itunes:summary>Data &amp; Society welcomes The Workers Lab Co-Founder and CEO Carmen Rojas; Entrepreneur and Author Rachel Schneider; and Professor, Researcher, and Activist Tamara K. Nopper to discuss the intersection of fintech and credit and benefit systems for low-wage workers with Data &amp; Society Labor Engagement Lead Aiha Nguyen.

Rojas and Schneider both focus on the financial challenges facing precarious low-wage workers–including “gig” workers–and how these workers might need different benefits than have traditionally been provided, like retirement. Nopper offers insight into the world of credit scoring and data, analyzing how fintech “innovation” intersects with race, class, and gender wealth gaps. Nguyen is an organizer who works to bridge research and practice, expanding understanding of technological systems’ impact on work. Together, they discuss questions such as:

How will current and projected income volatility in the gig economy change available workplace benefits?
What role could fintech play in the future of work? Can workers be a part of shaping that future?
What data will low-income working families need to share in order to have access to capital–and will it be worth it?

Dr. Carmen Rojas is the Co-Founder and CEO of The Workers Lab, an organization that invests in experiments and innovation to build power for working people in the 21st century. For more than 20 years, Carmen has worked with foundations, financial institutions, and non-profits to improve the lives of working people across the United States. Carmen currently sits on the boards of the Marguerite Casey Foundation, Neighborhood Funders Group, General Service Foundation, JOLT, Certification Associates, and on the Advisory Boards of Fund Good Jobs and Floodgate Academy. Carmen holds a Ph.D. in City and Regional Planning from the University of California, Berkeley and was a Fulbright Scholar in 2007.

Rachel Schneider is the Omidyar Network Entrepreneur-in-Residence at the Aspen Institute Financial Security Program and co-author of The Financial Diaries: How American Families Cope in a World of Uncertainty. Rachel’s research has been featured in the nation’s top publications, including the New York Times, Wall Street Journal and many others. Though she began her career as an investment banker at Merrill Lynch &amp; Co., Rachel credits her commitment to the potential for innovative finance to solve major social problems from her days as a VISTA Volunteer (now AmeriCorps). She holds a J.D./M.B.A. from the University of Chicago, and a B.A. from UC Berkeley.

Tamara K. Nopper holds a PhD in Sociology; her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</itunes:summary>
      <itunes:subtitle>Data &amp; Society welcomes The Workers Lab Co-Founder and CEO Carmen Rojas; Entrepreneur and Author Rachel Schneider; and Professor, Researcher, and Activist Tamara K. Nopper to discuss the intersection of fintech and credit and benefit systems for low-wage workers with Data &amp; Society Labor Engagement Lead Aiha Nguyen.

Rojas and Schneider both focus on the financial challenges facing precarious low-wage workers–including “gig” workers–and how these workers might need different benefits than have traditionally been provided, like retirement. Nopper offers insight into the world of credit scoring and data, analyzing how fintech “innovation” intersects with race, class, and gender wealth gaps. Nguyen is an organizer who works to bridge research and practice, expanding understanding of technological systems’ impact on work. Together, they discuss questions such as:

How will current and projected income volatility in the gig economy change available workplace benefits?
What role could fintech play in the future of work? Can workers be a part of shaping that future?
What data will low-income working families need to share in order to have access to capital–and will it be worth it?

Dr. Carmen Rojas is the Co-Founder and CEO of The Workers Lab, an organization that invests in experiments and innovation to build power for working people in the 21st century. For more than 20 years, Carmen has worked with foundations, financial institutions, and non-profits to improve the lives of working people across the United States. Carmen currently sits on the boards of the Marguerite Casey Foundation, Neighborhood Funders Group, General Service Foundation, JOLT, Certification Associates, and on the Advisory Boards of Fund Good Jobs and Floodgate Academy. Carmen holds a Ph.D. in City and Regional Planning from the University of California, Berkeley and was a Fulbright Scholar in 2007.

Rachel Schneider is the Omidyar Network Entrepreneur-in-Residence at the Aspen Institute Financial Security Program and co-author of The Financial Diaries: How American Families Cope in a World of Uncertainty. Rachel’s research has been featured in the nation’s top publications, including the New York Times, Wall Street Journal and many others. Though she began her career as an investment banker at Merrill Lynch &amp; Co., Rachel credits her commitment to the potential for innovative finance to solve major social problems from her days as a VISTA Volunteer (now AmeriCorps). She holds a J.D./M.B.A. from the University of Chicago, and a B.A. from UC Berkeley.

Tamara K. Nopper holds a PhD in Sociology; her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>52</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-10-01t20:31:09+00:00-3e5e1c58974e890</guid>
      <title>Freedom in Moderation: Platforms, Press, and the Public</title>
      <description><![CDATA[<p>Data &amp; Society welcomes Mike Ananny and Tarleton Gillespie for a conversation with Kate Klonick about the underlying decisions that impact the public’s access to media systems and internet platforms.</p>
<p>In &quot;Networked Press Freedom: Creating Infrastructures for a Public Right to Hear,&quot; Mike Ananny offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. His book proposes what robust, self-governing publics need to demand of technologists and journalists alike.</p>
<p>Tarleton Gillespie’s &quot;Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media&quot; investigates how social media platforms police what we post online—and the way these decisions shape public discourse, cultural production, and the fabric of society. Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why “content moderators” censor or promote user-posted content. The book then flips the way we think about moderation, to argue that content moderation is not ancillary to what platforms do; it is essential, definitional, constitutional. And given that, the very fact of moderation should change how we understand what platforms are.</p>
<p>Mike Ananny is an associate professor of communication and journalism in the Annenberg School at the University of Southern California (USC), a faculty affiliate with USC’s Science, Technology, and Society initiative, and a 2018-19 Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.</p>
<p>Tarleton Gillespie is a principal researcher at Microsoft Research New England and an affiliated associate professor at Cornell University. He co-founded the blog Culture Digitally. His previous book is the award-winning &quot;Wired Shut: Copyright and the Shape of Digital Culture.&quot;</p>
<p>Kate Klonick is an assistant professor of law at St. John’s University Law School and an affiliate at the Information Society Project at Yale Law School, Data &amp; Society, and New America. Her work on networked technologies’ effect on the areas of social norm enforcement, torts, property, intellectual property, artificial intelligence, robotics, freedom of expression, and governance has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The Atlantic, Slate, The Guardian, and numerous other publications.</p>
]]></description>
      <pubDate>Wed, 03 Oct 2018 17:04:37 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Data &amp; Society welcomes Mike Ananny and Tarleton Gillespie for a conversation with Kate Klonick about the underlying decisions that impact the public’s access to media systems and internet platforms.</p>
<p>In &quot;Networked Press Freedom: Creating Infrastructures for a Public Right to Hear,&quot; Mike Ananny offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. His book proposes what robust, self-governing publics need to demand of technologists and journalists alike.</p>
<p>Tarleton Gillespie’s &quot;Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media&quot; investigates how social media platforms police what we post online—and the way these decisions shape public discourse, cultural production, and the fabric of society. Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why “content moderators” censor or promote user-posted content. The book then flips the way we think about moderation, to argue that content moderation is not ancillary to what platforms do; it is essential, definitional, constitutional. And given that, the very fact of moderation should change how we understand what platforms are.</p>
<p>Mike Ananny is an associate professor of communication and journalism in the Annenberg School at the University of Southern California (USC), a faculty affiliate with USC’s Science, Technology, and Society initiative, and a 2018-19 Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.</p>
<p>Tarleton Gillespie is a principal researcher at Microsoft Research New England and an affiliated associate professor at Cornell University. He co-founded the blog Culture Digitally. His previous book is the award-winning &quot;Wired Shut: Copyright and the Shape of Digital Culture.&quot;</p>
<p>Kate Klonick is an assistant professor of law at St. John’s University Law School and an affiliate at the Information Society Project at Yale Law School, Data &amp; Society, and New America. Her work on networked technologies’ effect on the areas of social norm enforcement, torts, property, intellectual property, artificial intelligence, robotics, freedom of expression, and governance has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The Atlantic, Slate, The Guardian, and numerous other publications.</p>
]]></content:encoded>
      <enclosure length="39724974" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/283aa888-b872-4cf0-a042-bd2ff3e7c861/ep0056_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Freedom in Moderation: Platforms, Press, and the Public</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:41:23</itunes:duration>
      <itunes:summary>Data &amp; Society welcomes Mike Ananny and Tarleton Gillespie for a conversation with Kate Klonick about the underlying decisions that impact the public’s access to media systems and internet platforms.

In &quot;Networked Press Freedom: Creating Infrastructures for a Public Right to Hear,&quot; Mike Ananny offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. His book proposes what robust, self-governing publics need to demand of technologists and journalists alike.

Tarleton Gillespie’s &quot;Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media&quot; investigates how social media platforms police what we post online—and the way these decisions shape public discourse, cultural production, and the fabric of society. Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why “content moderators” censor or promote user-posted content. The book then flips the way we think about moderation, to argue that content moderation is not ancillary to what platforms do; it is essential, definitional, constitutional. And given that, the very fact of moderation should change how we understand what platforms are.

Mike Ananny is an associate professor of communication and journalism in the Annenberg School at the University of Southern California (USC), a faculty affiliate with USC’s Science, Technology, and Society initiative, and a 2018-19 Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.

Tarleton Gillespie is a principal researcher at Microsoft Research New England and an affiliated associate professor at Cornell University. He co-founded the blog Culture Digitally. His previous book is the award-winning &quot;Wired Shut: Copyright and the Shape of Digital Culture.&quot;

Kate Klonick is an assistant professor of law at St. John’s University Law School and an affiliate at the Information Society Project at Yale Law School, Data &amp; Society, and New America. Her work on networked technologies’ effect on the areas of social norm enforcement, torts, property, intellectual property, artificial intelligence, robotics, freedom of expression, and governance has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The Atlantic, Slate, The Guardian, and numerous other publications.</itunes:summary>
      <itunes:subtitle>Data &amp; Society welcomes Mike Ananny and Tarleton Gillespie for a conversation with Kate Klonick about the underlying decisions that impact the public’s access to media systems and internet platforms.

In &quot;Networked Press Freedom: Creating Infrastructures for a Public Right to Hear,&quot; Mike Ananny offers a new way to think about freedom of the press in a time when media systems are in fundamental flux. Seeing press freedom as essential for democratic self-governance, Ananny explores what publics need, what kind of free press they should demand, and how today’s press freedom emerges from intertwined collections of humans and machines. His book proposes what robust, self-governing publics need to demand of technologists and journalists alike.

Tarleton Gillespie’s &quot;Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media&quot; investigates how social media platforms police what we post online—and the way these decisions shape public discourse, cultural production, and the fabric of society. Gillespie provides an overview of current social media practices and explains the underlying rationales for how, when, and why “content moderators” censor or promote user-posted content. The book then flips the way we think about moderation, to argue that content moderation is not ancillary to what platforms do; it is essential, definitional, constitutional. And given that, the very fact of moderation should change how we understand what platforms are.

Mike Ananny is an associate professor of communication and journalism in the Annenberg School at the University of Southern California (USC), a faculty affiliate with USC’s Science, Technology, and Society initiative, and a 2018-19 Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.

Tarleton Gillespie is a principal researcher at Microsoft Research New England and an affiliated associate professor at Cornell University. He co-founded the blog Culture Digitally. His previous book is the award-winning &quot;Wired Shut: Copyright and the Shape of Digital Culture.&quot;

Kate Klonick is an assistant professor of law at St. John’s University Law School and an affiliate at the Information Society Project at Yale Law School, Data &amp; Society, and New America. Her work on networked technologies’ effect on the areas of social norm enforcement, torts, property, intellectual property, artificial intelligence, robotics, freedom of expression, and governance has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The Atlantic, Slate, The Guardian, and numerous other publications.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>51</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-09-24t20:16:43+00:00-b51654d3d648c8b</guid>
      <title>Gigged: The End of the Job and the Future of Work</title>
      <description><![CDATA[<p>Journalist Sarah Kessler discusses her new book &quot;Gigged: The End of the Job and the Future of Work.&quot; Kessler shares her analysis of the perils and promises of the platform gig economy in conversation with Data &amp; Society’s Alex Rosenblat, researcher and author of the forthcoming book &quot;Uberland: How Algorithms Are Rewriting the Rules of Work&quot; (October 23, 2018), and Aiha Nguyen, Social Instabilities in Labor Futures Engagement Lead.</p>
<p>One in three American workers is now a freelancer. This “gig economy”―one that provides neither the guarantee of steady hours nor benefits―emerged out of the digital era and has revolutionized the way we do business. High-profile tech start-ups such as Uber and Airbnb are constantly making headlines for the “disruption” they cause to the industries they overturn.</p>
<p>But “disruption” introduces new challenges to employees and job-seekers who seek to navigate platform policies, ensure workplace safety, and hedge against instability. Join us for a timely discussion on the quest to find meaningful, well-paid work as technology increasingly destabilizes and transforms the future of labor.</p>
<p>Sarah Kessler is a journalist based in New York City. She is the author of Gigged: The End of the Job and the Future of Work and an editor at Quartz. Previously, she covered the gig economy as a senior writer at Fast Company and managed startup coverage at Mashable. Her reporting has been cited by The Washington Post, New York Magazine, and NPR.</p>
<p>The Future of Labor research initiative at Data &amp; Society seeks to better understand emergent disruptions in the labor force as a result of data-centric technological development, with a special focus on structural inequalities. Its team recently released the report Beyond Disruption: How Tech Shapes Labor Across Domestic Work &amp; Ridehailing–as featured in the New York Times, NPR All Things Considered, and The Nation.</p>
]]></description>
      <pubDate>Tue, 25 Sep 2018 18:53:53 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Journalist Sarah Kessler discusses her new book &quot;Gigged: The End of the Job and the Future of Work.&quot; Kessler shares her analysis of the perils and promises of the platform gig economy in conversation with Data &amp; Society’s Alex Rosenblat, researcher and author of the forthcoming book &quot;Uberland: How Algorithms Are Rewriting the Rules of Work&quot; (October 23, 2018), and Aiha Nguyen, Social Instabilities in Labor Futures Engagement Lead.</p>
<p>One in three American workers is now a freelancer. This “gig economy”―one that provides neither the guarantee of steady hours nor benefits―emerged out of the digital era and has revolutionized the way we do business. High-profile tech start-ups such as Uber and Airbnb are constantly making headlines for the “disruption” they cause to the industries they overturn.</p>
<p>But “disruption” introduces new challenges to employees and job-seekers who seek to navigate platform policies, ensure workplace safety, and hedge against instability. Join us for a timely discussion on the quest to find meaningful, well-paid work as technology increasingly destabilizes and transforms the future of labor.</p>
<p>Sarah Kessler is a journalist based in New York City. She is the author of Gigged: The End of the Job and the Future of Work and an editor at Quartz. Previously, she covered the gig economy as a senior writer at Fast Company and managed startup coverage at Mashable. Her reporting has been cited by The Washington Post, New York Magazine, and NPR.</p>
<p>The Future of Labor research initiative at Data &amp; Society seeks to better understand emergent disruptions in the labor force as a result of data-centric technological development, with a special focus on structural inequalities. Its team recently released the report Beyond Disruption: How Tech Shapes Labor Across Domestic Work &amp; Ridehailing, as featured in The New York Times, NPR’s All Things Considered, and The Nation.</p>
]]></content:encoded>
      <enclosure length="38956765" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/8177ec4b-94e0-4620-9de7-28c7a2ad7931/ep0055_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Gigged: The End of the Job and the Future of Work</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:40:33</itunes:duration>
      <itunes:summary>Journalist Sarah Kessler discusses her new book &quot;Gigged: The End of the Job and the Future of Work.&quot; Kessler shares her analysis of the perils and promises of the platform gig economy in conversation with Data &amp; Society’s Alex Rosenblat, researcher and author of the forthcoming book &quot;Uberland: How Algorithms Are Rewriting the Rules of Work&quot; (October 23, 2018), and Aiha Nguyen, Social Instabilities in Labor Futures Engagement Lead.

One in three American workers is now a freelancer. This “gig economy”―one that provides neither the guarantee of steady hours nor benefits―emerged out of the digital era and has revolutionized the way we do business. High-profile tech start-ups such as Uber and Airbnb are constantly making headlines for the “disruption” they cause to the industries they overturn.

But “disruption” introduces new challenges to employees and job-seekers who seek to navigate platform policies, ensure workplace safety, and hedge against instability. Join us for a timely discussion on the quest to find meaningful, well-paid work as technology increasingly destabilizes and transforms the future of labor.

Sarah Kessler is a journalist based in New York City. She is the author of Gigged: The End of the Job and the Future of Work and an editor at Quartz. Previously, she covered the gig economy as a senior writer at Fast Company and managed startup coverage at Mashable. Her reporting has been cited by The Washington Post, New York Magazine, and NPR.

The Future of Labor research initiative at Data &amp; Society seeks to better understand emergent disruptions in the labor force as a result of data-centric technological development, with a special focus on structural inequalities. Its team recently released the report Beyond Disruption: How Tech Shapes Labor Across Domestic Work &amp; Ridehailing, as featured in The New York Times, NPR’s All Things Considered, and The Nation.</itunes:summary>
      <itunes:subtitle>Journalist Sarah Kessler discusses her new book &quot;Gigged: The End of the Job and the Future of Work.&quot; Kessler shares her analysis of the perils and promises of the platform gig economy in conversation with Data &amp; Society’s Alex Rosenblat, researcher and author of the forthcoming book &quot;Uberland: How Algorithms Are Rewriting the Rules of Work&quot; (October 23, 2018), and Aiha Nguyen, Social Instabilities in Labor Futures Engagement Lead.

One in three American workers is now a freelancer. This “gig economy”―one that provides neither the guarantee of steady hours nor benefits―emerged out of the digital era and has revolutionized the way we do business. High-profile tech start-ups such as Uber and Airbnb are constantly making headlines for the “disruption” they cause to the industries they overturn.

But “disruption” introduces new challenges to employees and job-seekers who seek to navigate platform policies, ensure workplace safety, and hedge against instability. Join us for a timely discussion on the quest to find meaningful, well-paid work as technology increasingly destabilizes and transforms the future of labor.

Sarah Kessler is a journalist based in New York City. She is the author of Gigged: The End of the Job and the Future of Work and an editor at Quartz. Previously, she covered the gig economy as a senior writer at Fast Company and managed startup coverage at Mashable. Her reporting has been cited by The Washington Post, New York Magazine, and NPR.

The Future of Labor research initiative at Data &amp; Society seeks to better understand emergent disruptions in the labor force as a result of data-centric technological development, with a special focus on structural inequalities. Its team recently released the report Beyond Disruption: How Tech Shapes Labor Across Domestic Work &amp; Ridehailing, as featured in The New York Times, NPR’s All Things Considered, and The Nation.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>50</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-07-09t20:01:41+00:00-224a8d0c12e9423</guid>
      <title>Online Speech Regulation: A Comparative Perspective</title>
      <description><![CDATA[<p>Claudia Haupt discusses competing frameworks for regulating speech on the web. Claudia Haupt is a 2017-18 Data &amp; Society Fellow and a resident fellow with the Information Society Project at Yale Law School. She previously taught at Columbia Law School and George Washington University Law School. Prior to that, she clerked at the Regional Court of Appeals of Cologne and practiced law at the Cologne office of the law firm of Graf von Westphalen, with a focus on information technology law.</p>
]]></description>
      <pubDate>Tue, 31 Jul 2018 19:57:02 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Claudia Haupt discusses competing frameworks for regulating speech on the web. Claudia Haupt is a 2017-18 Data &amp; Society Fellow and a resident fellow with the Information Society Project at Yale Law School. She previously taught at Columbia Law School and George Washington University Law School. Prior to that, she clerked at the Regional Court of Appeals of Cologne and practiced law at the Cologne office of the law firm of Graf von Westphalen, with a focus on information technology law.</p>
]]></content:encoded>
      <enclosure length="13687553" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/4215beac-e016-4d8a-994d-c2de0b61fd82/ep0052_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Online Speech Regulation: A Comparative Perspective</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:14:12</itunes:duration>
      <itunes:summary>Claudia Haupt discusses competing frameworks for regulating speech on the web. Claudia Haupt is a 2017-18 Data &amp; Society Fellow and a resident fellow with the Information Society Project at Yale Law School. She previously taught at Columbia Law School and George Washington University Law School. Prior to that, she clerked at the Regional Court of Appeals of Cologne and practiced law at the Cologne office of the law firm of Graf von Westphalen, with a focus on information technology law.</itunes:summary>
      <itunes:subtitle>Claudia Haupt discusses competing frameworks for regulating speech on the web. Claudia Haupt is a 2017-18 Data &amp; Society Fellow and a resident fellow with the Information Society Project at Yale Law School. She previously taught at Columbia Law School and George Washington University Law School. Prior to that, she clerked at the Regional Court of Appeals of Cologne and practiced law at the Cologne office of the law firm of Graf von Westphalen, with a focus on information technology law.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>49</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-07-09t19:46:23+00:00-6b66a58f366de2f</guid>
      <title>Data Science Ethics</title>
      <description><![CDATA[<p>Matthew L. Jones speaks about key illiteracies surrounding metadata, the hacking of our court system, and the possibility of ethics at scale. Jones is a 2017-2018 Data &amp; Society Fellow who studies the history of science and technology, with a focus on early modern Europe and on recent information technologies. He is completing a book on computing and state surveillance of communications and is working on a historical and ethnographic account of big data, its relation to statistics and machine learning, and its growth as a fundamental new form of technical expertise. Jones is currently the James R. Barker Professor of Contemporary Civilization in Columbia University’s Department of History.</p>
]]></description>
      <pubDate>Tue, 10 Jul 2018 19:00:00 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Matthew L. Jones speaks about key illiteracies surrounding metadata, the hacking of our court system, and the possibility of ethics at scale. Jones is a 2017-2018 Data &amp; Society Fellow who studies the history of science and technology, with a focus on early modern Europe and on recent information technologies. He is completing a book on computing and state surveillance of communications and is working on a historical and ethnographic account of big data, its relation to statistics and machine learning, and its growth as a fundamental new form of technical expertise. Jones is currently the James R. Barker Professor of Contemporary Civilization in Columbia University’s Department of History.</p>
]]></content:encoded>
      <enclosure length="16041816" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/df05e778-f0d9-4d84-a858-783930991eff/ep0051_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Data Science Ethics</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:16:40</itunes:duration>
      <itunes:summary>Matthew L. Jones speaks about key illiteracies surrounding metadata, the hacking of our court system, and the possibility of ethics at scale. Jones is a 2017-2018 Data &amp; Society Fellow who studies the history of science and technology, with a focus on early modern Europe and on recent information technologies. He is completing a book on computing and state surveillance of communications and is working on a historical and ethnographic account of big data, its relation to statistics and machine learning, and its growth as a fundamental new form of technical expertise. Jones is currently the James R. Barker Professor of Contemporary Civilization in Columbia University’s Department of History.</itunes:summary>
      <itunes:subtitle>Matthew L. Jones speaks about key illiteracies surrounding metadata, the hacking of our court system, and the possibility of ethics at scale. Jones is a 2017-2018 Data &amp; Society Fellow who studies the history of science and technology, with a focus on early modern Europe and on recent information technologies. He is completing a book on computing and state surveillance of communications and is working on a historical and ethnographic account of big data, its relation to statistics and machine learning, and its growth as a fundamental new form of technical expertise. Jones is currently the James R. Barker Professor of Contemporary Civilization in Columbia University’s Department of History.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>48</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-06-19t15:05:58+00:00-7bacb9903526495</guid>
      <title>Alternative Data, Credit Scoring, and Financial Freedom</title>
      <description><![CDATA[<p>Tamara K. Nopper's talk at Future Perfect explains how credit agencies such as FICO use narratives of credit as personal responsibility to justify increased data surveillance of consumers. Reasoning that sources of &quot;alternative data&quot; such as social network usage are a response to discriminatory practices, these agencies are selling financial freedom at the cost of racial injustice.</p>
<p>Future Perfect is a gathering at Data &amp; Society that brings together individuals from a variety of world-building disciplines (from art and fiction to architecture and science) to explore the uses, abuses, and paradoxes of speculative futures.</p>
<p>Tamara K. Nopper has a PhD in Sociology, and her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</p>
]]></description>
      <pubDate>Tue, 19 Jun 2018 16:16:12 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Tamara K. Nopper's talk at Future Perfect explains how credit agencies such as FICO use narratives of credit as personal responsibility to justify increased data surveillance of consumers. Reasoning that sources of &quot;alternative data&quot; such as social network usage are a response to discriminatory practices, these agencies are selling financial freedom at the cost of racial injustice.</p>
<p>Future Perfect is a gathering at Data &amp; Society that brings together individuals from a variety of world-building disciplines (from art and fiction to architecture and science) to explore the uses, abuses, and paradoxes of speculative futures.</p>
<p>Tamara K. Nopper has a PhD in Sociology, and her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</p>
]]></content:encoded>
      <enclosure length="19929859" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/9ba8fc3e-e134-4b1c-bb5e-07b49437fb86/ep0050_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Alternative Data, Credit Scoring, and Financial Freedom</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:20:42</itunes:duration>
      <itunes:summary>Tamara K. Nopper&apos;s talk at Future Perfect explains how credit agencies such as FICO use narratives of credit as personal responsibility to justify increased data surveillance of consumers. Reasoning that sources of &quot;alternative data&quot; such as social network usage are a response to discriminatory practices, these agencies are selling financial freedom at the cost of racial injustice.

Future Perfect is a gathering at Data &amp; Society that brings together individuals from a variety of world-building disciplines (from art and fiction to architecture and science) to explore the uses, abuses, and paradoxes of speculative futures. 

Tamara K. Nopper has a PhD in Sociology, and her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</itunes:summary>
      <itunes:subtitle>Tamara K. Nopper&apos;s talk at Future Perfect explains how credit agencies such as FICO use narratives of credit as personal responsibility to justify increased data surveillance of consumers. Reasoning that sources of &quot;alternative data&quot; such as social network usage are a response to discriminatory practices, these agencies are selling financial freedom at the cost of racial injustice.

Future Perfect is a gathering at Data &amp; Society that brings together individuals from a variety of world-building disciplines (from art and fiction to architecture and science) to explore the uses, abuses, and paradoxes of speculative futures. 

Tamara K. Nopper has a PhD in Sociology, and her teaching and research focus on the intersection of economic, racial, and gender inequality, with a particular emphasis on entrepreneurship, banking, globalization, urban development, and money and surveillance. Her publications have examined immigrant entrepreneurship, minority business development, the globalization of ethnic banking, and Asian American communities. Her current work looks at Korean immigrant entrepreneurship and post-Civil Rights era minority politics.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>47</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-05-22t21:32:35+00:00-80549c00d078556</guid>
      <title>Algorithms of Oppression</title>
      <description><![CDATA[<p>In &quot;Algorithms of Oppression&quot;, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.</p>
<p>Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.</p>
]]></description>
      <pubDate>Tue, 29 May 2018 19:54:40 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In &quot;Algorithms of Oppression&quot;, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.</p>
<p>Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.</p>
]]></content:encoded>
      <enclosure length="39057363" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/065760ff-0bf2-4be0-9a52-90c0d64c669d/ep0049_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Algorithms of Oppression</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:40:36</itunes:duration>
      <itunes:summary>In &quot;Algorithms of Oppression&quot;, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.

Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.</itunes:summary>
      <itunes:subtitle>In &quot;Algorithms of Oppression&quot;, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.

Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>46</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-05-15t23:03:03+00:00-a40df6d37456c9e</guid>
      <title>SCL and Cambridge Analytica: Peering Inside the Propaganda Machine</title>
      <description><![CDATA[<p>Cambridge Analytica and their parent company SCL Group hit the headlines recently when, after their work on the Trump campaign, reporting exposed their misuse of Facebook data, linked them to ‘Brexit’ and to unethical conduct in international elections, and revealed their relationship to defense contracting.</p>
<p>Dr. Emma Briant has spent over a decade researching SCL and Cambridge Analytica. She drew on substantial contacts she developed in her work on defense propaganda (Propaganda and Counter-terrorism: Strategies for Global Change, Manchester University Press, 2015) to research an upcoming book, What’s Wrong with the Democrats? Media Bias, Inequality, and the Rise of Donald Trump (co-authored with George Washington University professor Robert M. Entman) and academic publications on the EU referendum.</p>
<p>In this talk, Briant discusses her analysis of the company’s activities in each of these areas, how she gained such unique access to key executives who worked on the campaigns, as well as the implications of the key evidence she recently submitted to several public inquiries in the UK.</p>
]]></description>
      <pubDate>Wed, 16 May 2018 14:21:48 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Cambridge Analytica and their parent company SCL Group hit the headlines recently when, after their work on the Trump campaign, reporting exposed their misuse of Facebook data, linked them to ‘Brexit’ and to unethical conduct in international elections, and revealed their relationship to defense contracting.</p>
<p>Dr. Emma Briant has spent over a decade researching SCL and Cambridge Analytica. She drew on substantial contacts she developed in her work on defense propaganda (Propaganda and Counter-terrorism: Strategies for Global Change, Manchester University Press, 2015) to research an upcoming book, What’s Wrong with the Democrats? Media Bias, Inequality, and the Rise of Donald Trump (co-authored with George Washington University professor Robert M. Entman) and academic publications on the EU referendum.</p>
<p>In this talk, Briant discusses her analysis of the company’s activities in each of these areas, how she gained such unique access to key executives who worked on the campaigns, as well as the implications of the key evidence she recently submitted to several public inquiries in the UK.</p>
]]></content:encoded>
      <enclosure length="43990232" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1fc1dbea-e954-40ad-b75a-60a3afc92a46/ep0048_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>SCL and Cambridge Analytica: Peering Inside the Propaganda Machine</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:45:45</itunes:duration>
      <itunes:summary>Cambridge Analytica and their parent company SCL Group hit the headlines recently when, after their work on the Trump campaign, reporting exposed their misuse of Facebook data, linked them to ‘Brexit’ and to unethical conduct in international elections, and revealed their relationship to defense contracting.

Dr. Emma Briant has spent over a decade researching SCL and Cambridge Analytica. She drew on substantial contacts she developed in her work on defense propaganda (Propaganda and Counter-terrorism: Strategies for Global Change, Manchester University Press, 2015) to research an upcoming book, What’s Wrong with the Democrats? Media Bias, Inequality, and the Rise of Donald Trump (co-authored with George Washington University professor Robert M. Entman) and academic publications on the EU referendum.

In this talk, Briant discusses her analysis of the company’s activities in each of these areas, how she gained such unique access to key executives who worked on the campaigns, as well as the implications of the key evidence she recently submitted to several public inquiries in the UK.</itunes:summary>
      <itunes:subtitle>Cambridge Analytica and their parent company SCL Group hit the headlines recently when, after their work on the Trump campaign, reporting exposed their misuse of Facebook data, linked them to ‘Brexit’ and to unethical conduct in international elections, and revealed their relationship to defense contracting.

Dr. Emma Briant has spent over a decade researching SCL and Cambridge Analytica. She drew on substantial contacts she developed in her work on defense propaganda (Propaganda and Counter-terrorism: Strategies for Global Change, Manchester University Press, 2015) to research an upcoming book, What’s Wrong with the Democrats? Media Bias, Inequality, and the Rise of Donald Trump (co-authored with George Washington University professor Robert M. Entman) and academic publications on the EU referendum.

In this talk, Briant discusses her analysis of the company’s activities in each of these areas, how she gained such unique access to key executives who worked on the campaigns, as well as the implications of the key evidence she recently submitted to several public inquiries in the UK.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>45</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-03-05t19:35:18+00:00-700eec45eef4405</guid>
      <title>Real Talk About Fake News</title>
      <description><![CDATA[<p>Real Talk about Fake News | Nabiha Syed in conversation with Claire Wardle and Joan Donovan: “Fake news” isn’t exactly new: Tabloids have long hawked alien baby photos and Elvis sightings. Many have thus argued that fake news—propaganda, misinformation, and conspiracy theories—has always existed and therefore requires no new consideration.</p>
<p>When we agonize over the fake news phenomenon, though, we are not talking about these kinds of fabricated stories. What we are really focusing on is why we have been suddenly inundated by false information—purposefully deployed—that spreads so quickly and persuades so effectively. This is a different conception of fake news, and it presents a question about how information operates at scale in the internet era.</p>
<p>In this Databite talk, Nabiha Syed explores how existing First Amendment theories fail to adequately explain our digital information economy, and how that theoretical incoherence leaves users and social media platforms ill-equipped to deal with “fake news” and other “bad” speech online. Nabiha also offers several factors to be considered in any systemic theory that can help move us beyond the troubled status quo.</p>
]]></description>
      <pubDate>Tue, 06 Mar 2018 21:32:33 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Real Talk about Fake News | Nabiha Syed in conversation with Claire Wardle and Joan Donovan: “Fake news” isn’t exactly new: Tabloids have long hawked alien baby photos and Elvis sightings. Many have thus argued that fake news—propaganda, misinformation, and conspiracy theories—has always existed and therefore requires no new consideration.</p>
<p>When we agonize over the fake news phenomenon, though, we are not talking about these kinds of fabricated stories. What we are really focusing on is why we have been suddenly inundated by false information—purposefully deployed—that spreads so quickly and persuades so effectively. This is a different conception of fake news, and it presents a question about how information operates at scale in the internet era.</p>
<p>In this Databite talk, Nabiha Syed explores how existing First Amendment theories fail to adequately explain our digital information economy, and how that theoretical incoherence leaves users and social media platforms ill-equipped to deal with “fake news” and other “bad” speech online. Nabiha also offers several factors to be considered in any systemic theory that can help move us beyond the troubled status quo.</p>
]]></content:encoded>
      <enclosure length="39530443" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/ed7518f0-3943-4f72-bca9-c3db20add983/ep0047_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Real Talk About Fake News</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:41:07</itunes:duration>
      <itunes:summary>Real Talk about Fake News | Nabiha Syed in conversation with Claire Wardle and Joan Donovan: “Fake news” isn’t exactly new: Tabloids have long hawked alien baby photos and Elvis sightings. Many have thus argued that fake news—propaganda, misinformation, and conspiracy theories—has always existed, and therefore requires no new consideration.

When we agonize over the fake news phenomenon, though, we are not talking about these kinds of fabricated stories. What we are really focusing on is why we have been suddenly inundated by false information—purposefully deployed—that spreads so quickly and persuades so effectively. This is a different conception of fake news, and it presents a question about how information operates at scale in the internet era.

In this Databite talk, Nabiha Syed explores how existing First Amendment theories fail to adequately explain our digital information economy, and how that theoretical incoherence leaves users and social media platforms ill-equipped to deal with “fake news” and other “bad” speech online. Nabiha also offers several factors to be considered in any systemic theory that can help move us beyond the troubled status quo.</itunes:summary>
      <itunes:subtitle>Real Talk about Fake News | Nabiha Syed in conversation with Claire Wardle and Joan Donovan: “Fake news” isn’t exactly new: Tabloids have long hawked alien baby photos and Elvis sightings. Many have thus argued that fake news—propaganda, misinformation, and conspiracy theories—has always existed, and therefore requires no new consideration.

When we agonize over the fake news phenomenon, though, we are not talking about these kinds of fabricated stories. What we are really focusing on is why we have been suddenly inundated by false information—purposefully deployed—that spreads so quickly and persuades so effectively. This is a different conception of fake news, and it presents a question about how information operates at scale in the internet era.

In this Databite talk, Nabiha Syed explores how existing First Amendment theories fail to adequately explain our digital information economy, and how that theoretical incoherence leaves users and social media platforms ill-equipped to deal with “fake news” and other “bad” speech online. Nabiha also offers several factors to be considered in any systemic theory that can help move us beyond the troubled status quo.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>44</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2018-01-22t19:24:38+00:00-c86af1c70db3bdb</guid>
      <title>Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor</title>
      <description><![CDATA[<p>Virginia Eubanks speaks about her most recent book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Eubanks systematically shows the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The book is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lies dying to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile.</p>
<p>The U.S. has always used its most cutting-edge science and technology to contain, investigate, discipline and punish the destitute. Like the county poorhouse and scientific charity before them, digital tracking and automated decision-making hide poverty from the middle-class public and give the nation the ethical distance it needs to make inhuman choices: which families get food and which starve, who has housing and who remains homeless, and which families are broken up by the state. In the process, they weaken democracy and betray our most cherished national values.</p>
]]></description>
      <pubDate>Wed, 24 Jan 2018 15:18:43 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Virginia Eubanks speaks about her most recent book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Eubanks systematically shows the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The book is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lies dying to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile.</p>
<p>The U.S. has always used its most cutting-edge science and technology to contain, investigate, discipline and punish the destitute. Like the county poorhouse and scientific charity before them, digital tracking and automated decision-making hide poverty from the middle-class public and give the nation the ethical distance it needs to make inhuman choices: which families get food and which starve, who has housing and who remains homeless, and which families are broken up by the state. In the process, they weaken democracy and betray our most cherished national values.</p>
]]></content:encoded>
      <enclosure length="21028031" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/e5ff986f-f146-42ac-93a7-a141ac31ce65/ep0046_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:21:51</itunes:duration>
      <itunes:summary>Virginia Eubanks speaks about her most recent book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Eubanks systematically shows the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The book is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lies dying to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile.

The U.S. has always used its most cutting-edge science and technology to contain, investigate, discipline and punish the destitute. Like the county poorhouse and scientific charity before them, digital tracking and automated decision-making hide poverty from the middle-class public and give the nation the ethical distance it needs to make inhuman choices: which families get food and which starve, who has housing and who remains homeless, and which families are broken up by the state. In the process, they weaken democracy and betray our most cherished national values.</itunes:summary>
      <itunes:subtitle>Virginia Eubanks speaks about her most recent book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Eubanks systematically shows the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The book is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lies dying to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile.

The U.S. has always used its most cutting-edge science and technology to contain, investigate, discipline and punish the destitute. Like the county poorhouse and scientific charity before them, digital tracking and automated decision-making hide poverty from the middle-class public and give the nation the ethical distance it needs to make inhuman choices: which families get food and which starve, who has housing and who remains homeless, and which families are broken up by the state. In the process, they weaken democracy and betray our most cherished national values.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>43</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-12-12t01:51:39+00:00-90f4d5cbdff178b</guid>
      <title>Regulating informational infrastructure: Internet platforms as the new public utilities</title>
      <description><![CDATA[<p>K. Sabeel Rahman: The informational, economic, and political influence of the dominant tech platforms — Google, Facebook, and Amazon in particular — has become a central topic of debate. K. Sabeel Rahman argues that these firms are best understood as the core infrastructure of our 21st century economy and public sphere. The infrastructural power of these firms raises a range of policy questions. What exactly about these firms (e.g., their accumulation of data, their gatekeeping functions, their control over vital public and economic functions like retail delivery or online speech) is “infrastructural?” How should these infrastructural functions be governed and regulated, in light of both their economic and political influence?</p>
]]></description>
      <pubDate>Sat, 30 Dec 2017 05:05:52 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>K. Sabeel Rahman: The informational, economic, and political influence of the dominant tech platforms — Google, Facebook, and Amazon in particular — has become a central topic of debate. K. Sabeel Rahman argues that these firms are best understood as the core infrastructure of our 21st century economy and public sphere. The infrastructural power of these firms raises a range of policy questions. What exactly about these firms (e.g., their accumulation of data, their gatekeeping functions, their control over vital public and economic functions like retail delivery or online speech) is “infrastructural?” How should these infrastructural functions be governed and regulated, in light of both their economic and political influence?</p>
]]></content:encoded>
      <enclosure length="24021797" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/e67c1265-111c-41b1-be2b-c294bb1e1384/ep0045_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Regulating informational infrastructure: Internet platforms as the new public utilities</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:24:57</itunes:duration>
      <itunes:summary>K. Sabeel Rahman: The informational, economic, and political influence of the dominant tech platforms — Google, Facebook, and Amazon in particular — has become a central topic of debate. K. Sabeel Rahman argues that these firms are best understood as the core infrastructure of our 21st century economy and public sphere. The infrastructural power of these firms raises a range of policy questions. What exactly about these firms (e.g., their accumulation of data, their gatekeeping functions, their control over vital public and economic functions like retail delivery or online speech) is “infrastructural?” How should these infrastructural functions be governed and regulated, in light of both their economic and political influence?</itunes:summary>
      <itunes:subtitle>K. Sabeel Rahman: The informational, economic, and political influence of the dominant tech platforms — Google, Facebook, and Amazon in particular — has become a central topic of debate. K. Sabeel Rahman argues that these firms are best understood as the core infrastructure of our 21st century economy and public sphere. The infrastructural power of these firms raises a range of policy questions. What exactly about these firms (e.g., their accumulation of data, their gatekeeping functions, their control over vital public and economic functions like retail delivery or online speech) is “infrastructural?” How should these infrastructural functions be governed and regulated, in light of both their economic and political influence?</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>42</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-11-07t01:26:55+00:00-29488832026de2b</guid>
      <title>The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement</title>
      <description><![CDATA[<p>Andrew Guthrie Ferguson discusses his book The Rise of Big Data Policing, which critically examines data-driven surveillance technologies and their legal impact on everyday policing. Andrew Guthrie Ferguson is professor of law at the University of the District of Columbia's David A. Clarke School of Law. He is a national expert on predictive policing, big data surveillance, and the Fourth Amendment.</p>
]]></description>
      <pubDate>Tue, 7 Nov 2017 16:00:43 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Andrew Guthrie Ferguson discusses his book The Rise of Big Data Policing, which critically examines data-driven surveillance technologies and their legal impact on everyday policing. Andrew Guthrie Ferguson is professor of law at the University of the District of Columbia's David A. Clarke School of Law. He is a national expert on predictive policing, big data surveillance, and the Fourth Amendment.</p>
]]></content:encoded>
      <enclosure length="33128905" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/76d2c393-493a-427b-bb69-8868ef2aae5a/ep0044_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:34:27</itunes:duration>
      <itunes:summary>Andrew Guthrie Ferguson discusses his book The Rise of Big Data Policing, which critically examines data-driven surveillance technologies and their legal impact on everyday policing. Andrew Guthrie Ferguson is professor of law at the University of the District of Columbia&apos;s David A. Clarke School of Law. He is a national expert on predictive policing, big data surveillance, and the Fourth Amendment.</itunes:summary>
      <itunes:subtitle>Andrew Guthrie Ferguson discusses his book The Rise of Big Data Policing, which critically examines data-driven surveillance technologies and their legal impact on everyday policing. Andrew Guthrie Ferguson is professor of law at the University of the District of Columbia&apos;s David A. Clarke School of Law. He is a national expert on predictive policing, big data surveillance, and the Fourth Amendment.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>41</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-10-18t02:55:49+00:00-e6bd1da1e2643c6</guid>
      <title>WTF: What’s the Future and Why It’s Up to Us</title>
      <description><![CDATA[<p>WTF? can be an expression of amazement or an expression of dismay. In today’s economy, we have far too much dismay along with our amazement, and technology bears some of the blame. In this combination of memoir, business strategy guide, and call to action, Tim O’Reilly, Silicon Valley’s leading intellectual and the founder of O’Reilly Media, explores the upside and the potential downsides of today’s WTF? technologies.</p>
]]></description>
      <pubDate>Tue, 7 Nov 2017 15:50:35 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>WTF? can be an expression of amazement or an expression of dismay. In today’s economy, we have far too much dismay along with our amazement, and technology bears some of the blame. In this combination of memoir, business strategy guide, and call to action, Tim O’Reilly, Silicon Valley’s leading intellectual and the founder of O’Reilly Media, explores the upside and the potential downsides of today’s WTF? technologies.</p>
]]></content:encoded>
      <enclosure length="27545633" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/9014dac3-e500-46f7-9ac8-7e32e7035ae5/ep0043_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>WTF: What’s the Future and Why It’s Up to Us</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:28:38</itunes:duration>
      <itunes:summary>WTF? can be an expression of amazement or an expression of dismay. In today’s economy, we have far too much dismay along with our amazement, and technology bears some of the blame. In this combination of memoir, business strategy guide, and call to action, Tim O’Reilly, Silicon Valley’s leading intellectual and the founder of O’Reilly Media, explores the upside and the potential downsides of today’s WTF? technologies.</itunes:summary>
      <itunes:subtitle>WTF? can be an expression of amazement or an expression of dismay. In today’s economy, we have far too much dismay along with our amazement, and technology bears some of the blame. In this combination of memoir, business strategy guide, and call to action, Tim O’Reilly, Silicon Valley’s leading intellectual and the founder of O’Reilly Media, explores the upside and the potential downsides of today’s WTF? technologies.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>40</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-09-13t06:35:05+00:00-3703235c005a418</guid>
      <title>The New Governors: The People, Rules, and Processes Governing Online Speech</title>
      <description><![CDATA[<p>Kate Klonick talks about her recent article, &quot;The New Governors: The People, Rules, and Processes Governing Online Speech&quot;, which provides one of the first analyses of what private online platforms are actually doing to moderate speech under a regulatory and First Amendment framework. It argues that to best understand online speech, we must abandon traditional doctrinal and regulatory analogies, and understand these private content platforms as systems of governance operating outside the boundaries of the First Amendment. Kate is currently a doctoral candidate at Yale Law School.</p>
]]></description>
      <pubDate>Wed, 13 Sep 2017 15:22:54 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Kate Klonick talks about her recent article, &quot;The New Governors: The People, Rules, and Processes Governing Online Speech&quot;, which provides one of the first analyses of what private online platforms are actually doing to moderate speech under a regulatory and First Amendment framework. It argues that to best understand online speech, we must abandon traditional doctrinal and regulatory analogies, and understand these private content platforms as systems of governance operating outside the boundaries of the First Amendment. Kate is currently a doctoral candidate at Yale Law School.</p>
]]></content:encoded>
      <enclosure length="19413901" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/721357a9-e94d-41c6-93c9-83c8ee3637d5/ep0042_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The New Governors: The People, Rules, and Processes Governing Online Speech</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:20:09</itunes:duration>
      <itunes:summary>Kate Klonick talks about her recent article, &quot;The New Governors: The People, Rules, and Processes Governing Online Speech&quot;, which provides one of the first analyses of what private online platforms are actually doing to moderate speech under a regulatory and First Amendment framework. It argues that to best understand online speech, we must abandon traditional doctrinal and regulatory analogies, and understand these private content platforms as systems of governance operating outside the boundaries of the First Amendment. Kate is currently a doctoral candidate at Yale Law School.</itunes:summary>
      <itunes:subtitle>Kate Klonick talks about her recent article, &quot;The New Governors: The People, Rules, and Processes Governing Online Speech&quot;, which provides one of the first analyses of what private online platforms are actually doing to moderate speech under a regulatory and First Amendment framework. It argues that to best understand online speech, we must abandon traditional doctrinal and regulatory analogies, and understand these private content platforms as systems of governance operating outside the boundaries of the First Amendment. Kate is currently a doctoral candidate at Yale Law School.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>39</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-26t22:15:56+00:00-9ee9acd8ea50982</guid>
      <title>Digital Dystopias: How Michael Crichton Taught Me To Start Worrying And Fear The Future</title>
      <description><![CDATA[<p>Joanna Radin discusses the writing of Jurassic Park author Michael Crichton. Although Crichton is most famous for imagining an island of dinosaurs, the metaphors in his non-fiction articles about computers are even more terrifying.</p>
<p>This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction, to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference originating from insights of the institute’s regular Speculative Fiction Reading Group.</p>
<p>Joanna Radin is Assistant Professor of the History of Science and Medicine at Yale where she teaches feminist and indigenous STS and the history of biomedicine and anthropology. Before receiving her PhD in History and Sociology of Science at UPenn she studied science communication at Cornell and worked as a risk communication specialist. She is the author of Life on Ice: A History of New Uses for Cold Blood (University of Chicago Press, 2017) and a co-editor of Cryopolitics: Frozen Life in a Melting World (MIT Press, 2017). Radin is currently writing a book about science fiction, subjectivity, and biomedicine.</p>
]]></description>
      <pubDate>Wed, 28 Jun 2017 12:23:34 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Joanna Radin discusses the writing of Jurassic Park author Michael Crichton. Although Crichton is most famous for imagining an island of dinosaurs, the metaphors in his non-fiction articles about computers are even more terrifying.</p>
<p>This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction, to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference originating from insights of the institute’s regular Speculative Fiction Reading Group.</p>
<p>Joanna Radin is Assistant Professor of the History of Science and Medicine at Yale where she teaches feminist and indigenous STS and the history of biomedicine and anthropology. Before receiving her PhD in History and Sociology of Science at UPenn she studied science communication at Cornell and worked as a risk communication specialist. She is the author of Life on Ice: A History of New Uses for Cold Blood (University of Chicago Press, 2017) and a co-editor of Cryopolitics: Frozen Life in a Melting World (MIT Press, 2017). Radin is currently writing a book about science fiction, subjectivity, and biomedicine.</p>
]]></content:encoded>
      <enclosure length="20054493" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/3fdffccf-6d20-40ad-ac30-a8aa2f49b86d/ep0041_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Digital Dystopias: How Michael Crichton Taught Me To Start Worrying And Fear The Future</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:20:50</itunes:duration>
      <itunes:summary>Joanna Radin discusses the writing of Jurassic Park author Michael Crichton. Although Crichton is most famous for imagining an island of dinosaurs, the metaphors in his non-fiction articles about computers are even more terrifying.

This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction, to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference originating from insights of the institute’s regular Speculative Fiction Reading Group.


Joanna Radin is Assistant Professor of the History of Science and Medicine at Yale where she teaches feminist and indigenous STS and the history of biomedicine and anthropology. Before receiving her PhD in History and Sociology of Science at UPenn she studied science communication at Cornell and worked as a risk communication specialist. She is the author of Life on Ice: A History of New Uses for Cold Blood (University of Chicago Press, 2017) and a co-editor of Cryopolitics: Frozen Life in a Melting World (MIT Press, 2017). Radin is currently writing a book about science fiction, subjectivity, and biomedicine.</itunes:summary>
      <itunes:subtitle>Joanna Radin discusses the writing of Jurassic Park author Michael Crichton. Although Crichton is most famous for imagining an island of dinosaurs, the metaphors in his non-fiction articles about computers are even more terrifying.

This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction, to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference originating from insights of the institute’s regular Speculative Fiction Reading Group.


Joanna Radin is Assistant Professor of the History of Science and Medicine at Yale where she teaches feminist and indigenous STS and the history of biomedicine and anthropology. Before receiving her PhD in History and Sociology of Science at UPenn she studied science communication at Cornell and worked as a risk communication specialist. She is the author of Life on Ice: A History of New Uses for Cold Blood (University of Chicago Press, 2017) and a co-editor of Cryopolitics: Frozen Life in a Melting World (MIT Press, 2017). Radin is currently writing a book about science fiction, subjectivity, and biomedicine.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>38</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-26t21:42:05+00:00-ff2260a1f6e20af</guid>
      <title>Future Perfect: Designer and Discarded Genomes</title>
      <description><![CDATA[<p>Ruha Benjamin's presentation entitled &quot;Designer and Discarded Genomes: Experimenting with Sociological Imagination through Speculative Methods&quot; uses speculative field notes to explore the antecedents and implications of the current era of genetic engineering.</p>
<p>This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction, to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference originating from insights of the institute’s regular Speculative Fiction Reading Group.</p>
<p>Ruha Benjamin is Assistant Professor of African American Studies at Princeton University, author of People’s Science: Bodies and Rights on the Stem Cell Frontier (Stanford University Press), and 2016-17 fellow at the Institute for Advanced Study. Her work examines the social dimensions of science, technology, and medicine with a particular focus on the relationship between innovation and inequity. She earned her PhD in Sociology from UC Berkeley, completed fellowships at UCLA’s Institute for Genetics and Society and Harvard’s Science, Technology, and Society Program, and has received grants and fellowships from the American Council of Learned Societies, National Science Foundation, Ford Foundation, and California Institute for Regenerative Medicine among others. Her work is published in numerous journals including Science, Technology, and Human Values; Ethnicity &amp; Health; and Annals of the American Academy of Social and Political Science.</p>
]]></description>
      <pubDate>Wed, 28 Jun 2017 12:22:28 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Ruha Benjamin's presentation entitled &quot;Designer and Discarded Genomes: Experimenting with Sociological Imagination through Speculative Methods&quot; uses speculative field notes to explore the antecedents and implications of the current era of genetic engineering.</p>
<p>This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference that grew out of the institute’s regular Speculative Fiction Reading Group.</p>
<p>Ruha Benjamin is Assistant Professor of African American Studies at Princeton University, author of People’s Science: Bodies and Rights on the Stem Cell Frontier (Stanford University Press), and a 2016-17 fellow at the Institute for Advanced Study. Her work examines the social dimensions of science, technology, and medicine, with a particular focus on the relationship between innovation and inequity. She earned her PhD in Sociology from UC Berkeley, completed fellowships at UCLA’s Institute for Society and Genetics and Harvard’s Science, Technology, and Society Program, and has received grants and fellowships from the American Council of Learned Societies, National Science Foundation, Ford Foundation, and California Institute for Regenerative Medicine, among others. Her work is published in numerous journals, including Science, Technology, and Human Values; Ethnicity &amp; Health; and the Annals of the American Academy of Political and Social Science.</p>
]]></content:encoded>
      <enclosure length="16444906" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/2bd6440e-b75f-4e8e-914a-a2e8cc7b6eb5/ep0040_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Future Perfect: Designer and Discarded Genomes</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:17:04</itunes:duration>
      <itunes:summary>Ruha Benjamin&apos;s presentation entitled &quot;designer and discarded genomes: experimenting with sociological imagination through speculative methods&quot; uses speculative field notes to explore the antecedents and implications of the current era of genetic engineering.

This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference that grew out of the institute’s regular Speculative Fiction Reading Group.

Ruha Benjamin is Assistant Professor of African American Studies at Princeton University, author of People’s Science: Bodies and Rights on the Stem Cell Frontier (Stanford University Press), and a 2016-17 fellow at the Institute for Advanced Study. Her work examines the social dimensions of science, technology, and medicine, with a particular focus on the relationship between innovation and inequity. She earned her PhD in Sociology from UC Berkeley, completed fellowships at UCLA’s Institute for Society and Genetics and Harvard’s Science, Technology, and Society Program, and has received grants and fellowships from the American Council of Learned Societies, National Science Foundation, Ford Foundation, and California Institute for Regenerative Medicine, among others. Her work is published in numerous journals, including Science, Technology, and Human Values; Ethnicity &amp; Health; and the Annals of the American Academy of Political and Social Science.</itunes:summary>
      <itunes:subtitle>Ruha Benjamin&apos;s presentation entitled &quot;designer and discarded genomes: experimenting with sociological imagination through speculative methods&quot; uses speculative field notes to explore the antecedents and implications of the current era of genetic engineering.

This talk was presented for the event Future Perfect. In a moment when the future increasingly feels like a foregone conclusion, Future Perfect brought actors from a variety of world-building disciplines (from art and fiction to law and science) together to explore the uses, abuses, and paradoxes of speculative futures. Curated by Data &amp; Society artist-in-residence Ingrid Burrington, Future Perfect was an experimental one-day, invitation-only conference that grew out of the institute’s regular Speculative Fiction Reading Group.

Ruha Benjamin is Assistant Professor of African American Studies at Princeton University, author of People’s Science: Bodies and Rights on the Stem Cell Frontier (Stanford University Press), and a 2016-17 fellow at the Institute for Advanced Study. Her work examines the social dimensions of science, technology, and medicine, with a particular focus on the relationship between innovation and inequity. She earned her PhD in Sociology from UC Berkeley, completed fellowships at UCLA’s Institute for Society and Genetics and Harvard’s Science, Technology, and Society Program, and has received grants and fellowships from the American Council of Learned Societies, National Science Foundation, Ford Foundation, and California Institute for Regenerative Medicine, among others. Her work is published in numerous journals, including Science, Technology, and Human Values; Ethnicity &amp; Health; and the Annals of the American Academy of Political and Social Science.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>37</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-21t07:03:40+00:00-e8ed5d59c5203a4</guid>
      <title>Databites 100 Series: Machine Learning: What’s Fair and How Do We Decide?</title>
      <description><![CDATA[<p>Suchana Seth speaks about different definitions of fairness in the context of machine learning. Suchana Seth is a physicist-turned-data scientist from India. She has built scalable data science solutions for startups and industry research labs, and holds patents in text mining and natural language processing. Suchana believes in the power of data to drive positive change, volunteers with DataKind, mentors data-for-good projects, and advises research on IoT ethics. She is also passionate about closing the gender gap in data science, and leads data science workshops with organizations like Women Who Code.</p>
]]></description>
      <pubDate>Thu, 22 Jun 2017 18:43:42 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Suchana Seth speaks about different definitions of fairness in the context of machine learning. Suchana Seth is a physicist-turned-data scientist from India. She has built scalable data science solutions for startups and industry research labs, and holds patents in text mining and natural language processing. Suchana believes in the power of data to drive positive change, volunteers with DataKind, mentors data-for-good projects, and advises research on IoT ethics. She is also passionate about closing the gender gap in data science, and leads data science workshops with organizations like Women Who Code.</p>
]]></content:encoded>
      <enclosure length="8766076" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/dc8cc012-d351-420c-a06b-48df65be4328/ep0039_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databites 100 Series: Machine Learning: What’s Fair and How Do We Decide?</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:09:05</itunes:duration>
      <itunes:summary>Suchana Seth speaks about different definitions of fairness in the context of machine learning. Suchana Seth is a physicist-turned-data scientist from India. She has built scalable data science solutions for startups and industry research labs, and holds patents in text mining and natural language processing. Suchana believes in the power of data to drive positive change, volunteers with DataKind, mentors data-for-good projects, and advises research on IoT ethics. She is also passionate about closing the gender gap in data science, and leads data science workshops with organizations like Women Who Code.</itunes:summary>
      <itunes:subtitle>Suchana Seth speaks about different definitions of fairness in the context of machine learning. Suchana Seth is a physicist-turned-data scientist from India. She has built scalable data science solutions for startups and industry research labs, and holds patents in text mining and natural language processing. Suchana believes in the power of data to drive positive change, volunteers with DataKind, mentors data-for-good projects, and advises research on IoT ethics. She is also passionate about closing the gender gap in data science, and leads data science workshops with organizations like Women Who Code.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>36</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-21t07:19:38+00:00-b083750ad099d4b</guid>
      <title>Databites 100 Series: Stats and the City: A Data-Driven Approach to Criminal Justice and Child Welfare</title>
      <description><![CDATA[<p>Ravi Shroff speaks about his research studying predictive models for decision-making in city and state government. Ravi Shroff is a Research Scientist at New York University’s Center for Urban Science and Progress (CUSP), where he specializes in computational social science. His work involves using statistical and machine learning techniques to understand the criminal justice system, child welfare, and related urban issues. At Data &amp; Society, Ravi will examine how simple computational models can be designed and implemented in city government. He studied mathematics at UC San Diego (PhD) and applied urban science and informatics at CUSP (MS).</p>
]]></description>
      <pubDate>Thu, 22 Jun 2017 18:37:45 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Ravi Shroff speaks about his research studying predictive models for decision-making in city and state government. Ravi Shroff is a Research Scientist at New York University’s Center for Urban Science and Progress (CUSP), where he specializes in computational social science. His work involves using statistical and machine learning techniques to understand the criminal justice system, child welfare, and related urban issues. At Data &amp; Society, Ravi will examine how simple computational models can be designed and implemented in city government. He studied mathematics at UC San Diego (PhD) and applied urban science and informatics at CUSP (MS).</p>
]]></content:encoded>
      <enclosure length="12387935" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/4ed13b1e-7e0e-48b6-a03d-63a7ebe7d0e7/ep0038_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databites 100 Series: Stats and the City: A Data-Driven Approach to Criminal Justice and Child Welfare</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:12:51</itunes:duration>
      <itunes:summary>Ravi Shroff speaks about his research studying predictive models for decision-making in city and state government. Ravi Shroff is a Research Scientist at New York University’s Center for Urban Science and Progress (CUSP), where he specializes in computational social science. His work involves using statistical and machine learning techniques to understand the criminal justice system, child welfare, and related urban issues. At Data &amp; Society, Ravi will examine how simple computational models can be designed and implemented in city government. He studied mathematics at UC San Diego (PhD) and applied urban science and informatics at CUSP (MS).</itunes:summary>
      <itunes:subtitle>Ravi Shroff speaks about his research studying predictive models for decision-making in city and state government. Ravi Shroff is a Research Scientist at New York University’s Center for Urban Science and Progress (CUSP), where he specializes in computational social science. His work involves using statistical and machine learning techniques to understand the criminal justice system, child welfare, and related urban issues. At Data &amp; Society, Ravi will examine how simple computational models can be designed and implemented in city government. He studied mathematics at UC San Diego (PhD) and applied urban science and informatics at CUSP (MS).</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>35</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-21t07:27:41+00:00-37d6b02b60bfe6f</guid>
      <title>Databites 100 Series: Data Science Reasoning</title>
      <description><![CDATA[<p>Anne Washington talks about the risks of efficiency and the need for a common language when speaking about data science and public policy. Anne L. Washington is a computer scientist and a librarian who specializes in public sector technology management and informatics. She is an Assistant Professor at George Mason University. As a digital government scholar, she focuses her research on the production, meaning, and retrieval of public sector information. She developed her expertise on government data while working at the Congressional Research Service within the Library of Congress. She also served as an invited expert to the W3C E-Government Interest Group and the W3C Government Linked Data Working Group. She earned a PhD from The George Washington University School of Business. She holds a degree in computer science from Brown University and a Master’s in Library Information Science from Rutgers University. Before completing her PhD, she had extensive work experience in the private sector, including the Claris Software division of Apple Computer and Barclays Global Investors.</p>
]]></description>
      <pubDate>Thu, 22 Jun 2017 18:34:36 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Anne Washington talks about the risks of efficiency and the need for a common language when speaking about data science and public policy. Anne L. Washington is a computer scientist and a librarian who specializes in public sector technology management and informatics. She is an Assistant Professor at George Mason University. As a digital government scholar, she focuses her research on the production, meaning, and retrieval of public sector information. She developed her expertise on government data while working at the Congressional Research Service within the Library of Congress. She also served as an invited expert to the W3C E-Government Interest Group and the W3C Government Linked Data Working Group. She earned a PhD from The George Washington University School of Business. She holds a degree in computer science from Brown University and a Master’s in Library Information Science from Rutgers University. Before completing her PhD, she had extensive work experience in the private sector, including the Claris Software division of Apple Computer and Barclays Global Investors.</p>
]]></content:encoded>
      <enclosure length="11386423" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/108e2142-3abc-45cd-bb74-a8fc4668cb66/ep0037_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databites 100 Series: Data Science Reasoning</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:11:49</itunes:duration>
      <itunes:summary>Anne Washington talks about the risks of efficiency and the need for a common language when speaking about data science and public policy. Anne L. Washington is a computer scientist and a librarian who specializes in public sector technology management and informatics. She is an Assistant Professor at George Mason University. As a digital government scholar, she focuses her research on the production, meaning, and retrieval of public sector information. She developed her expertise on government data while working at the Congressional Research Service within the Library of Congress. She also served as an invited expert to the W3C E-Government Interest Group and the W3C Government Linked Data Working Group. She earned a PhD from The George Washington University School of Business. She holds a degree in computer science from Brown University and a Master’s in Library Information Science from Rutgers University. Before completing her PhD, she had extensive work experience in the private sector, including the Claris Software division of Apple Computer and Barclays Global Investors.</itunes:summary>
      <itunes:subtitle>Anne Washington talks about the risks of efficiency and the need for a common language when speaking about data science and public policy. Anne L. Washington is a computer scientist and a librarian who specializes in public sector technology management and informatics. She is an Assistant Professor at George Mason University. As a digital government scholar, she focuses her research on the production, meaning, and retrieval of public sector information. She developed her expertise on government data while working at the Congressional Research Service within the Library of Congress. She also served as an invited expert to the W3C E-Government Interest Group and the W3C Government Linked Data Working Group. She earned a PhD from The George Washington University School of Business. She holds a degree in computer science from Brown University and a Master’s in Library Information Science from Rutgers University. Before completing her PhD, she had extensive work experience in the private sector, including the Claris Software division of Apple Computer and Barclays Global Investors.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>34</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-14t05:59:57+00:00-bd64826dee3baba</guid>
      <title>Databites 100 Series: Trade Secrets and Black-Boxing Criminal Justice</title>
      <description><![CDATA[<p>Rebecca Wexler speaks about the dangers of trade secrets and the black-boxing of criminal justice. Wexler works with The Legal Aid Society to advocate for more lenient criminal discovery laws; draft legal motions to compel disclosure of data and source code for forensic technologies; and build partnerships with technology companies to facilitate a reasoned approach to defendants’ requests for user information.</p>
]]></description>
      <pubDate>Wed, 14 Jun 2017 18:31:51 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Rebecca Wexler speaks about the dangers of trade secrets and the black-boxing of criminal justice. Wexler works with The Legal Aid Society to advocate for more lenient criminal discovery laws; draft legal motions to compel disclosure of data and source code for forensic technologies; and build partnerships with technology companies to facilitate a reasoned approach to defendants’ requests for user information.</p>
]]></content:encoded>
      <enclosure length="13140290" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/8945f890-1da8-4e3d-873f-124188b26f91/ep0034_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databites 100 Series: Trade Secrets and Black-Boxing Criminal Justice</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:13:39</itunes:duration>
      <itunes:summary>Rebecca Wexler speaks about the dangers of trade secrets and the black-boxing of criminal justice. Wexler works with The Legal Aid Society to advocate for more lenient criminal discovery laws; draft legal motions to compel disclosure of data and source code for forensic technologies; and build partnerships with technology companies to facilitate a reasoned approach to defendants&apos; requests for user information.</itunes:summary>
      <itunes:subtitle>Rebecca Wexler speaks about the dangers of trade secrets and the black-boxing of criminal justice. Wexler works with The Legal Aid Society to advocate for more lenient criminal discovery laws; draft legal motions to compel disclosure of data and source code for forensic technologies; and build partnerships with technology companies to facilitate a reasoned approach to defendants&apos; requests for user information.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>33</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-14t05:54:23+00:00-fa4f63b17644819</guid>
      <title>Databites 100 Series: The DNA Revolution: Merging Data with Biology</title>
      <description><![CDATA[<p>Daniel Grushkin speaks about the origin of Genspace, the DNA revolution, and merging data with biology. Grushkin is the Executive Director and cofounder of Genspace, a nonprofit community laboratory dedicated to promoting citizen science and access to biotechnology.</p>
]]></description>
      <pubDate>Wed, 14 Jun 2017 18:16:40 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Daniel Grushkin speaks about the origin of Genspace, the DNA revolution, and merging data with biology. Grushkin is the Executive Director and cofounder of Genspace, a nonprofit community laboratory dedicated to promoting citizen science and access to biotechnology.</p>
]]></content:encoded>
      <enclosure length="11931051" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/70b60774-b10d-41e6-a94f-03af045da8d0/ep0035_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databites 100 Series: The DNA Revolution: Merging Data with Biology</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:12:23</itunes:duration>
      <itunes:summary>Daniel Grushkin speaks about the origin of Genspace, the DNA revolution, and merging data with biology. Grushkin is the Executive Director and cofounder of Genspace, a nonprofit community laboratory dedicated to promoting citizen science and access to biotechnology.</itunes:summary>
      <itunes:subtitle>Daniel Grushkin speaks about the origin of Genspace, the DNA revolution, and merging data with biology. Grushkin is the Executive Director and cofounder of Genspace, a nonprofit community laboratory dedicated to promoting citizen science and access to biotechnology.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>32</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-06-14t05:06:57+00:00-7a5f71f5acb2347</guid>
      <title>Databites 100 Series: Media Manipulation and Disinformation Online</title>
      <description><![CDATA[<p>Alice Marwick speaks about Data &amp; Society’s recent report on Media Manipulation and Disinformation Online. Marwick leads the Media Manipulation project at Data &amp; Society and will join the Communication department at the University of North Carolina at Chapel Hill in the fall.</p>
]]></description>
      <pubDate>Wed, 14 Jun 2017 18:15:27 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Alice Marwick speaks about Data &amp; Society’s recent report on Media Manipulation and Disinformation Online. Marwick leads the Media Manipulation project at Data &amp; Society and will join the Communication department at the University of North Carolina at Chapel Hill in the fall.</p>
]]></content:encoded>
      <enclosure length="14793900" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1cb737b3-facb-4c93-8e56-e032bdf827c7/ep0036_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Databites 100 Series: Media Manipulation and Disinformation Online</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:15:22</itunes:duration>
      <itunes:summary>Alice Marwick speaks about Data &amp; Society’s recent report on Media Manipulation and Disinformation Online. Marwick leads the Media Manipulation project at Data &amp; Society and will join the Communication department at the University of North Carolina at Chapel Hill in the fall.</itunes:summary>
      <itunes:subtitle>Alice Marwick speaks about Data &amp; Society’s recent report on Media Manipulation and Disinformation Online. Marwick leads the Media Manipulation project at Data &amp; Society and will join the Communication department at the University of North Carolina at Chapel Hill in the fall.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>31</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-05-22t19:33:15+00:00-be50448687b7957</guid>
      <title>The Ambivalent Internet: Mischief, Oddity, and Antagonism Online</title>
      <description><![CDATA[<p>Whitney Phillips and Ryan M. Milner explore the weird and mean and in-between that characterizes everyday expression online, from absurdist photoshops to antagonistic Twitter hashtags to ambivalent online play with the 2016 U.S. Presidential election. Through these discussions, the book shows how digital media can help and harm, bring together and push apart, and make us laugh and make us angry in equal measure. Most significantly for the current political climate, it shows how these media can equally facilitate and restrict voice. Not only do digital spaces and tools empower hate groups like the white nationalist alt-right and other extremist figures, they also empower progressive pushback against these same groups and figures—along with a whole range of folkloric play that eludes easy classification. By foregrounding the fundamental ambivalence of digital media, they demonstrate that there are no easy solutions, and no simplistic, one-size-fits-all answers, to pressing questions about free expression, democratic participation, and issues of basic safety on the contemporary internet.</p>
]]></description>
      <pubDate>Wed, 24 May 2017 15:56:41 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Whitney Phillips and Ryan M. Milner explore the weird and mean and in-between that characterizes everyday expression online, from absurdist photoshops to antagonistic Twitter hashtags to ambivalent online play with the 2016 U.S. Presidential election. Through these discussions, the book shows how digital media can help and harm, bring together and push apart, and make us laugh and make us angry in equal measure. Most significantly for the current political climate, it shows how these media can equally facilitate and restrict voice. Not only do digital spaces and tools empower hate groups like the white nationalist alt-right and other extremist figures, they also empower progressive pushback against these same groups and figures—along with a whole range of folkloric play that eludes easy classification. By foregrounding the fundamental ambivalence of digital media, they demonstrate that there are no easy solutions, and no simplistic, one-size-fits-all answers, to pressing questions about free expression, democratic participation, and issues of basic safety on the contemporary internet.</p>
]]></content:encoded>
      <enclosure length="50188916" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/2a691613-9c63-4305-974d-49911e77d405/ep0033_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Ambivalent Internet: Mischief, Oddity, and Antagonism Online</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:52:13</itunes:duration>
      <itunes:summary>Whitney Phillips and Ryan M. Milner explore the weird and mean and in-between that characterizes everyday expression online, from absurdist photoshops to antagonistic Twitter hashtags to ambivalent online play with the 2016 U.S. Presidential election. Through these discussions, the book shows how digital media can help and harm, bring together and push apart, and make us laugh and make us angry in equal measure. Most significantly for the current political climate, it shows how these media can equally facilitate and restrict voice. Not only do digital spaces and tools empower hate groups like the white nationalist alt-right and other extremist figures, they also empower progressive pushback against these same groups and figures—along with a whole range of folkloric play that eludes easy classification. By foregrounding the fundamental ambivalence of digital media, they demonstrate that there are no easy solutions, and no simplistic, one-size-fits-all answers, to pressing questions about free expression, democratic participation, and issues of basic safety on the contemporary internet.</itunes:summary>
      <itunes:subtitle>Whitney Phillips and Ryan M. Milner explore the weird and mean and in-between that characterizes everyday expression online, from absurdist photoshops to antagonistic Twitter hashtags to ambivalent online play with the 2016 U.S. Presidential election. Through these discussions, the book shows how digital media can help and harm, bring together and push apart, and make us laugh and make us angry in equal measure. Most significantly for the current political climate, it shows how these media can equally facilitate and restrict voice. Not only do digital spaces and tools empower hate groups like the white nationalist alt-right and other extremist figures, they also empower progressive pushback against these same groups and figures—along with a whole range of folkloric play that eludes easy classification. By foregrounding the fundamental ambivalence of digital media, they demonstrate that there are no easy solutions, and no simplistic, one-size-fits-all answers, to pressing questions about free expression, democratic participation, and issues of basic safety on the contemporary internet.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>30</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-05-15t15:10:25+00:00-509d2281f47e8f6</guid>
      <title>AI in the Open World: Directions, Challenges, and Futures</title>
      <description><![CDATA[<p>Eric Horvitz – Artificial intelligence (AI) is at an inflection point, poised to move into the open world and into our lives in numerous ways that will influence people and society. While AI promises to provide great value, along with these aspirations come concerns about inadvertent costs, rough edges, and failures. Concerns include failures of automation in the open world, biased data and algorithms, opacity of reasoning, adversarial attacks on AI systems, and runaway AI. Horvitz will discuss short- and longer-term challenges and studies aimed at addressing these concerns, including the One Hundred Year Study on AI at Stanford University and the Partnership on AI to Benefit People and Society.</p>
<p>Eric Horvitz is a technical fellow and director at Microsoft Research. His interests span theoretical and practical challenges in AI, and he has made contributions in machine learning, perception, decision making, and human-computer interaction. More information and publications are available at http://erichorvitz.com.</p>
]]></description>
      <pubDate>Tue, 16 May 2017 18:24:46 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Eric Horvitz – Artificial intelligence (AI) is at an inflection point and is poised to move into the open world and into our lives in numerous ways that will influence people and society. While AI promises great value, the aspirations come with concerns about inadvertent costs, rough edges, and failures. These include failures of automation in the open world, biased data and algorithms, opacity of reasoning, adversarial attacks on AI systems, and runaway AI. Horvitz will outline short- and longer-term challenges and discuss studies aimed at addressing these concerns, including the One Hundred Year Study on AI at Stanford University and the Partnership on AI to Benefit People and Society.</p>
<p>Eric Horvitz is a technical fellow and director at Microsoft Research. His interests span theoretical and practical challenges in AI, and he has made contributions in machine learning, perception, decision making, and human-computer interaction. More information and publications are available at http://erichorvitz.com.</p>
]]></content:encoded>
      <enclosure length="46795619" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/e9befd1e-b917-4f11-ae30-9dc876219537/ep0032_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>AI in the Open World: Directions, Challenges, and Futures</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:48:41</itunes:duration>
      <itunes:summary>Eric Horvitz – Artificial intelligence (AI) is at an inflection point and is poised to move into the open world and into our lives in numerous ways that will influence people and society. While AI promises great value, the aspirations come with concerns about inadvertent costs, rough edges, and failures. These include failures of automation in the open world, biased data and algorithms, opacity of reasoning, adversarial attacks on AI systems, and runaway AI. Horvitz will outline short- and longer-term challenges and discuss studies aimed at addressing these concerns, including the One Hundred Year Study on AI at Stanford University and the Partnership on AI to Benefit People and Society.

Eric Horvitz is a technical fellow and director at Microsoft Research. His interests span theoretical and practical challenges in AI, and he has made contributions in machine learning, perception, decision making, and human-computer interaction. More information and publications are available at http://erichorvitz.com.</itunes:summary>
      <itunes:subtitle>Eric Horvitz – Artificial intelligence (AI) is at an inflection point and is poised to move into the open world and into our lives in numerous ways that will influence people and society. While AI promises great value, the aspirations come with concerns about inadvertent costs, rough edges, and failures. These include failures of automation in the open world, biased data and algorithms, opacity of reasoning, adversarial attacks on AI systems, and runaway AI. Horvitz will outline short- and longer-term challenges and discuss studies aimed at addressing these concerns, including the One Hundred Year Study on AI at Stanford University and the Partnership on AI to Benefit People and Society.

Eric Horvitz is a technical fellow and director at Microsoft Research. His interests span theoretical and practical challenges in AI, and he has made contributions in machine learning, perception, decision making, and human-computer interaction. More information and publications are available at http://erichorvitz.com.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>29</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-05-01t20:14:37+00:00-4ee452dba872550</guid>
      <title>Why Should We Care About the Failure of the British Computing Industry?</title>
      <description><![CDATA[<p>Marie Hicks draws on the example of our closest historical cousin, the UK, to look at how computing initiatives can go wrong in unexpected ways at the national level. In 1944, the UK led the world in electronic computing. By 1974, the British computer industry was all but extinct. What happened in the intervening thirty years holds lessons for all postindustrial superpowers. This talk will outline the systematic processes deployed by the UK government to enhance the nation’s technological superiority, and through that its global political standing, and discuss why these efforts went disastrously wrong. The talk will conclude with a discussion of the ways the US is currently falling prey to similar errors of judgement in its attempts to leverage computing technology as an engine of social and economic change.</p>
<p>Marie Hicks is an assistant professor of the history of technology at Illinois Institute of Technology in Chicago, Illinois. Her work focuses on how gender and sexuality bring hidden technological dynamics to light, and how women’s experiences change the core narrative of the history of computing. Hicks’s book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, is available from MIT Press (2017). For more information, see programmedinequality.com. Hicks received her MA and Ph.D. from Duke University and her BA from Harvard University. Before entering academia, she worked as a UNIX systems administrator.</p>
]]></description>
      <pubDate>Tue, 2 May 2017 17:32:14 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Marie Hicks draws on the example of our closest historical cousin, the UK, to look at how computing initiatives can go wrong in unexpected ways at the national level. In 1944, the UK led the world in electronic computing. By 1974, the British computer industry was all but extinct. What happened in the intervening thirty years holds lessons for all postindustrial superpowers. This talk will outline the systematic processes deployed by the UK government to enhance the nation’s technological superiority, and through that its global political standing, and discuss why these efforts went disastrously wrong. The talk will conclude with a discussion of the ways the US is currently falling prey to similar errors of judgement in its attempts to leverage computing technology as an engine of social and economic change.</p>
<p>Marie Hicks is an assistant professor of the history of technology at Illinois Institute of Technology in Chicago, Illinois. Her work focuses on how gender and sexuality bring hidden technological dynamics to light, and how women’s experiences change the core narrative of the history of computing. Hicks’s book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, is available from MIT Press (2017). For more information, see programmedinequality.com. Hicks received her MA and Ph.D. from Duke University and her BA from Harvard University. Before entering academia, she worked as a UNIX systems administrator.</p>
]]></content:encoded>
      <enclosure length="28101996" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/3c2fb620-5077-474b-8346-d9e9b64d5c81/ep0031_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Why Should We Care About the Failure of the British Computing Industry?</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:29:12</itunes:duration>
      <itunes:summary>Marie Hicks draws on the example of our closest historical cousin, the UK, to look at how computing initiatives can go wrong in unexpected ways at the national level. In 1944, the UK led the world in electronic computing. By 1974, the British computer industry was all but extinct. What happened in the intervening thirty years holds lessons for all postindustrial superpowers. This talk will outline the systematic processes deployed by the UK government to enhance the nation’s technological superiority, and through that its global political standing, and discuss why these efforts went disastrously wrong. The talk will conclude with a discussion of the ways the US is currently falling prey to similar errors of judgement in its attempts to leverage computing technology as an engine of social and economic change.

Marie Hicks is an assistant professor of the history of technology at Illinois Institute of Technology in Chicago, Illinois. Her work focuses on how gender and sexuality bring hidden technological dynamics to light, and how women’s experiences change the core narrative of the history of computing. Hicks’s book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, is available from MIT Press (2017). For more information, see programmedinequality.com. Hicks received her MA and Ph.D. from Duke University and her BA from Harvard University. Before entering academia, she worked as a UNIX systems administrator.</itunes:summary>
      <itunes:subtitle>Marie Hicks draws on the example of our closest historical cousin, the UK, to look at how computing initiatives can go wrong in unexpected ways at the national level. In 1944, the UK led the world in electronic computing. By 1974, the British computer industry was all but extinct. What happened in the intervening thirty years holds lessons for all postindustrial superpowers. This talk will outline the systematic processes deployed by the UK government to enhance the nation’s technological superiority, and through that its global political standing, and discuss why these efforts went disastrously wrong. The talk will conclude with a discussion of the ways the US is currently falling prey to similar errors of judgement in its attempts to leverage computing technology as an engine of social and economic change.

Marie Hicks is an assistant professor of the history of technology at Illinois Institute of Technology in Chicago, Illinois. Her work focuses on how gender and sexuality bring hidden technological dynamics to light, and how women’s experiences change the core narrative of the history of computing. Hicks’s book, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, is available from MIT Press (2017). For more information, see programmedinequality.com. Hicks received her MA and Ph.D. from Duke University and her BA from Harvard University. Before entering academia, she worked as a UNIX systems administrator.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>28</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-04-25t09:17:01+00:00-3878ad5481ba1f7</guid>
      <title>Encoding Race, Encoding Class: Indian IT Workers in Berlin</title>
      <description><![CDATA[<p>Sareeta Amrute, professor of anthropology at the University of Washington, describes her research on the professional and private lives of highly skilled Indian IT coders in Berlin to reveal the oft-obscured realities of the embodied, raced, and classed nature of cognitive labor.</p>
]]></description>
      <pubDate>Tue, 25 Apr 2017 21:24:55 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Sareeta Amrute, professor of anthropology at the University of Washington, describes her research on the professional and private lives of highly skilled Indian IT coders in Berlin to reveal the oft-obscured realities of the embodied, raced, and classed nature of cognitive labor.</p>
]]></content:encoded>
      <enclosure length="25981734" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/55974769-da7a-405c-89ec-c13047821d6f/ep0029_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Encoding Race, Encoding Class: Indian IT Workers in Berlin</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:26:59</itunes:duration>
      <itunes:summary>Sareeta Amrute, professor of anthropology at the University of Washington, describes her research on the professional and private lives of highly skilled Indian IT coders in Berlin to reveal the oft-obscured realities of the embodied, raced, and classed nature of cognitive labor.</itunes:summary>
      <itunes:subtitle>Sareeta Amrute, professor of anthropology at the University of Washington, describes her research on the professional and private lives of highly skilled Indian IT coders in Berlin to reveal the oft-obscured realities of the embodied, raced, and classed nature of cognitive labor.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>27</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-04-25t09:02:46+00:00-7049769e72f027e</guid>
      <title>Post-Truth and New Realities: Algorithms, Alternative Facts, and Digital Ethics</title>
      <description><![CDATA[<p>Maurizio Ferraris and Martin Scherzinger – Recent scandals around alternative facts, post-truth, and hacking have raised a constellation of questions regarding the intersection of digital tools, the construction or verification of reality, and issues of power and authorship. Such questions have been at the center of theoretical and literary discussions in continental philosophy and critical theory for some years, drawing from or pushing against post-structuralist assertions regarding the death of the author and the relativism of ontology. Today, these questions are articulated in the realm of techno-politics with a new urgency.</p>
<p>The talk was moderated by Jessica Feldman from New York University’s Department of Media, Culture, and Communication, and hosted by Robyn Caplan from Data &amp; Society Research Institute.</p>
]]></description>
      <pubDate>Tue, 25 Apr 2017 21:21:41 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Maurizio Ferraris and Martin Scherzinger – Recent scandals around alternative facts, post-truth, and hacking have raised a constellation of questions regarding the intersection of digital tools, the construction or verification of reality, and issues of power and authorship. Such questions have been at the center of theoretical and literary discussions in continental philosophy and critical theory for some years, drawing from or pushing against post-structuralist assertions regarding the death of the author and the relativism of ontology. Today, these questions are articulated in the realm of techno-politics with a new urgency.</p>
<p>The talk was moderated by Jessica Feldman from New York University’s Department of Media, Culture, and Communication, and hosted by Robyn Caplan from Data &amp; Society Research Institute.</p>
]]></content:encoded>
      <enclosure length="42433401" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1fe8bc5d-6241-421a-a846-37a86b3229f3/ep0030_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Post-Truth and New Realities: Algorithms, Alternative Facts, and Digital Ethics</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:44:09</itunes:duration>
      <itunes:summary>Maurizio Ferraris and Martin Scherzinger – Recent scandals around alternative facts, post-truth, and hacking have raised a constellation of questions regarding the intersection of digital tools, the construction or verification of reality, and issues of power and authorship. Such questions have been at the center of theoretical and literary discussions in continental philosophy and critical theory for some years, drawing from or pushing against post-structuralist assertions regarding the death of the author and the relativism of ontology. Today, these questions are articulated in the realm of techno-politics with a new urgency.

The talk was moderated by Jessica Feldman from New York University’s Department of Media, Culture, and Communication, and hosted by Robyn Caplan from Data &amp; Society Research Institute.</itunes:summary>
      <itunes:subtitle>Maurizio Ferraris and Martin Scherzinger – Recent scandals around alternative facts, post-truth, and hacking have raised a constellation of questions regarding the intersection of digital tools, the construction or verification of reality, and issues of power and authorship. Such questions have been at the center of theoretical and literary discussions in continental philosophy and critical theory for some years, drawing from or pushing against post-structuralist assertions regarding the death of the author and the relativism of ontology. Today, these questions are articulated in the realm of techno-politics with a new urgency.

The talk was moderated by Jessica Feldman from New York University’s Department of Media, Culture, and Communication, and hosted by Robyn Caplan from Data &amp; Society Research Institute.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>26</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-04-11t17:28:49+00:00-3f0b89865917c43</guid>
      <title>Giving Voice: Mobile Communication, Disability, and Inequality</title>
      <description><![CDATA[<p>Meryl Alper – Mobile communication technologies are often hailed in the popular press and public policy as a means of “giving voice to the voiceless.” Behind the praise are determinist beliefs about technology as a gateway to opportunity, voice as a metaphor for agency and self-representation, and voicelessness as a stable and natural category. In this talk, based on her new book Giving Voice: Mobile Communication, Disability, and Inequality (MIT Press, 2017), Meryl Alper offers a new angle on these established critiques through a qualitative study of individuals with significant communication disabilities who use mobile devices for synthetic speech output. Alper finds that despite widespread claims to empowerment, these tools are still subject to disempowering structural inequalities. Culture, laws, institutions, and even technology itself can reinforce disparities among those with disabilities across class, race, ethnicity, and gender. Alper argues that voice is an overused and imprecise metaphor in media and communication studies, one that abstracts, obscures, and oversimplifies the human experience of disability. She will discuss implications of her research for our rapidly changing media ecology and political environment, where the question is not only which voices get to speak, but also who is thought to have a voice to speak with in the first place.</p>
]]></description>
      <pubDate>Wed, 12 Apr 2017 15:14:41 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Meryl Alper – Mobile communication technologies are often hailed in the popular press and public policy as a means of “giving voice to the voiceless.” Behind the praise are determinist beliefs about technology as a gateway to opportunity, voice as a metaphor for agency and self-representation, and voicelessness as a stable and natural category. In this talk, based on her new book Giving Voice: Mobile Communication, Disability, and Inequality (MIT Press, 2017), Meryl Alper offers a new angle on these established critiques through a qualitative study of individuals with significant communication disabilities who use mobile devices for synthetic speech output. Alper finds that despite widespread claims to empowerment, these tools are still subject to disempowering structural inequalities. Culture, laws, institutions, and even technology itself can reinforce disparities among those with disabilities across class, race, ethnicity, and gender. Alper argues that voice is an overused and imprecise metaphor in media and communication studies, one that abstracts, obscures, and oversimplifies the human experience of disability. She will discuss implications of her research for our rapidly changing media ecology and political environment, where the question is not only which voices get to speak, but also who is thought to have a voice to speak with in the first place.</p>
]]></content:encoded>
      <enclosure length="55702447" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/f858848c-dd63-47b9-a785-f7cfc044891f/ep0028_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Giving Voice: Mobile Communication, Disability, and Inequality</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:57:56</itunes:duration>
      <itunes:summary>Meryl Alper – Mobile communication technologies are often hailed in the popular press and public policy as a means of “giving voice to the voiceless.” Behind the praise are determinist beliefs about technology as a gateway to opportunity, voice as a metaphor for agency and self-representation, and voicelessness as a stable and natural category. In this talk, based on her new book Giving Voice: Mobile Communication, Disability, and Inequality (MIT Press, 2017), Meryl Alper offers a new angle on these established critiques through a qualitative study of individuals with significant communication disabilities who use mobile devices for synthetic speech output. Alper finds that despite widespread claims to empowerment, these tools are still subject to disempowering structural inequalities. Culture, laws, institutions, and even technology itself can reinforce disparities among those with disabilities across class, race, ethnicity, and gender. Alper argues that voice is an overused and imprecise metaphor in media and communication studies, one that abstracts, obscures, and oversimplifies the human experience of disability. She will discuss implications of her research for our rapidly changing media ecology and political environment, where the question is not only which voices get to speak, but also who is thought to have a voice to speak with in the first place.</itunes:summary>
      <itunes:subtitle>Meryl Alper – Mobile communication technologies are often hailed in the popular press and public policy as a means of “giving voice to the voiceless.” Behind the praise are determinist beliefs about technology as a gateway to opportunity, voice as a metaphor for agency and self-representation, and voicelessness as a stable and natural category. In this talk, based on her new book Giving Voice: Mobile Communication, Disability, and Inequality (MIT Press, 2017), Meryl Alper offers a new angle on these established critiques through a qualitative study of individuals with significant communication disabilities who use mobile devices for synthetic speech output. Alper finds that despite widespread claims to empowerment, these tools are still subject to disempowering structural inequalities. Culture, laws, institutions, and even technology itself can reinforce disparities among those with disabilities across class, race, ethnicity, and gender. Alper argues that voice is an overused and imprecise metaphor in media and communication studies, one that abstracts, obscures, and oversimplifies the human experience of disability. She will discuss implications of her research for our rapidly changing media ecology and political environment, where the question is not only which voices get to speak, but also who is thought to have a voice to speak with in the first place.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>25</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-03-22t07:27:39+00:00-c2fba75de8cbbb3</guid>
      <title>BioTech Futures Part 2</title>
      <description><![CDATA[<p>Christina Agapakis, the creative director of Ginkgo Bioworks and one of the world’s first biodesigners, discusses biotechnology and feminism.</p>
<p>In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019 the global synthetic biology market is projected to be worth $13.4 billion.</p>
<p>ABOUT THE SERIES</p>
<p>The Biotech Futures Talk + Lab Series explores the implications of and ways in which biology is becoming a data science. Each talk is paired with a 3-4 hour lab workshop at Genspace for Data &amp; Society and Genspace community members to demonstrate how these themes become realized in the lab.</p>
<p>Christina Agapakis is creative director of Ginkgo Bioworks, a biological design company growing cultured products for partners across many industries. Her work brings together biologists, engineers, designers, artists, and social scientists to explore the future of biotechnology. During her PhD at Harvard, she worked on producing hydrogen fuel in bacteria and making photosynthetic animals. She has taught designers at the Art Center College of Design and biomolecular engineers at UCLA, and she once made cheese using bacteria from the human body.</p>
]]></description>
      <pubDate>Wed, 22 Mar 2017 17:48:37 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Christina Agapakis, the creative director of Ginkgo Bioworks and one of the world’s first biodesigners, discusses biotechnology and feminism.</p>
<p>In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019 the global synthetic biology market is projected to be worth $13.4 billion.</p>
<p>ABOUT THE SERIES</p>
<p>The Biotech Futures Talk + Lab Series explores the implications of and ways in which biology is becoming a data science. Each talk is paired with a 3-4 hour lab workshop at Genspace for Data &amp; Society and Genspace community members to demonstrate how these themes become realized in the lab.</p>
<p>Christina Agapakis is creative director of Ginkgo Bioworks, a biological design company growing cultured products for partners across many industries. Her work brings together biologists, engineers, designers, artists, and social scientists to explore the future of biotechnology. During her PhD at Harvard, she worked on producing hydrogen fuel in bacteria and making photosynthetic animals. She has taught designers at the Art Center College of Design and biomolecular engineers at UCLA, and she once made cheese using bacteria from the human body.</p>
]]></content:encoded>
      <enclosure length="28878903" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1d569876-9bca-4f5c-a760-fb1f3a86fb35/ep0027_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>BioTech Futures Part 2</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:30:02</itunes:duration>
      <itunes:summary>Christina Agapakis, the creative director of Ginkgo Bioworks and one of the world’s first biodesigners, discusses biotechnology and feminism. 

In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019 the global synthetic biology market is projected to be worth $13.4 billion.

ABOUT THE SERIES

The Biotech Futures Talk + Lab Series explores the implications of and ways in which biology is becoming a data science. Each talk is paired with a 3-4 hour lab workshop at Genspace for Data &amp; Society and Genspace community members to demonstrate how these themes become realized in the lab.

Christina Agapakis is creative director of Ginkgo Bioworks, a biological design company growing cultured products for partners across many industries. Her work brings together biologists, engineers, designers, artists, and social scientists to explore the future of biotechnology. During her PhD at Harvard, she worked on producing hydrogen fuel in bacteria and making photosynthetic animals. She has taught designers at the Art Center College of Design and biomolecular engineers at UCLA, and she once made cheese using bacteria from the human body.</itunes:summary>
      <itunes:subtitle>Christina Agapakis, the creative director of Ginkgo Bioworks and one of the world’s first biodesigners, discusses biotechnology and feminism. 

In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019 the global synthetic biology market is projected to be worth $13.4 billion.

ABOUT THE SERIES

The Biotech Futures Talk + Lab Series explores the implications of and ways in which biology is becoming a data science. Each talk is paired with a 3-4 hour lab workshop at Genspace for Data &amp; Society and Genspace community members to demonstrate how these themes become realized in the lab.

Christina Agapakis is creative director of Ginkgo Bioworks, a biological design company growing cultured products for partners across many industries. Her work brings together biologists, engineers, designers, artists, and social scientists to explore the future of biotechnology. During her PhD at Harvard, she worked on producing hydrogen fuel in bacteria and making photosynthetic animals. She has taught designers at the Art Center College of Design and biomolecular engineers at UCLA, and she once made cheese using bacteria from the human body.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>24</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-03-22t07:18:54+00:00-f688a0e5795b64a</guid>
      <title>BioTech Futures Part 1</title>
      <description><![CDATA[<p>Tom Knight, considered the “godfather” of synthetic biology, discusses the origin of the scientific field and how it has evolved.</p>
<p>In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019, the global synthetic biology market is projected to be worth $13.4 billion.</p>
<p>Tom Knight spent most of his career teaching computer science and electrical engineering at MIT before playing a major role in creating the engineering discipline of synthetic biology. In 1996 he seeded interest in the field at DARPA, and built a molecular biology laboratory in the MIT computer science department. He developed important standards for engineering biological systems, specifically BioBricks, the first standard assembly technique for functional DNA components, and helped establish the MIT Registry of Standard Biological Parts.</p>
<p>He was one of four founders of IGEM, an international competition between undergraduate teams to design and build biological systems, now hosting 300 teams across the globe. In 2008, he co-founded Ginkgo Bioworks, where he remains a full-time researcher. His interests include minimal organisms, origins of life, and predictive models of biological systems. He is a Fellow of the American Association for the Advancement of Science, a director of the IGEM Foundation, and a member of the International Committee on the Taxonomy of the Mollicutes.</p>
]]></description>
      <pubDate>Wed, 22 Mar 2017 17:39:04 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Tom Knight, considered the “godfather” of synthetic biology, discusses the origin of the scientific field and how it has evolved.</p>
<p>In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019, the global synthetic biology market is projected to be worth $13.4 billion.</p>
<p>Tom Knight spent most of his career teaching computer science and electrical engineering at MIT before playing a major role in creating the engineering discipline of synthetic biology. In 1996 he seeded interest in the field at DARPA, and built a molecular biology laboratory in the MIT computer science department. He developed important standards for engineering biological systems, specifically BioBricks, the first standard assembly technique for functional DNA components, and helped establish the MIT Registry of Standard Biological Parts.</p>
<p>He was one of four founders of IGEM, an international competition between undergraduate teams to design and build biological systems, now hosting 300 teams across the globe. In 2008, he co-founded Ginkgo Bioworks, where he remains a full-time researcher. His interests include minimal organisms, origins of life, and predictive models of biological systems. He is a Fellow of the American Association for the Advancement of Science, a director of the IGEM Foundation, and a member of the International Committee on the Taxonomy of the Mollicutes.</p>
]]></content:encoded>
      <enclosure length="26982036" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/c3c5b438-7fda-4544-a3e8-18f0ea83eefa/ep0026_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>BioTech Futures Part 1</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:28:03</itunes:duration>
      <itunes:summary>Tom Knight, considered the “godfather” of synthetic biology, discusses the origin of the scientific field and how it has evolved.

In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019, the global synthetic biology market is projected to be worth $13.4 billion.

Tom Knight spent most of his career teaching computer science and electrical engineering at MIT before playing a major role in creating the engineering discipline of synthetic biology. In 1996 he seeded interest in the field at DARPA, and built a molecular biology laboratory in the MIT computer science department. He developed important standards for engineering biological systems, specifically BioBricks, the first standard assembly technique for functional DNA components, and helped establish the MIT Registry of Standard Biological Parts.

He was one of four founders of IGEM, an international competition between undergraduate teams to design and build biological systems, now hosting 300 teams across the globe. In 2008, he co-founded Ginkgo Bioworks, where he remains a full-time researcher. His interests include minimal organisms, origins of life, and predictive models of biological systems. He is a Fellow of the American Association for the Advancement of Science, a director of the IGEM Foundation, and a member of the International Committee on the Taxonomy of the Mollicutes.</itunes:summary>
      <itunes:subtitle>Tom Knight, considered the “godfather” of synthetic biology, discusses the origin of the scientific field and how it has evolved.

In the early 2000s, a group of scientists from outside mainstream biology proposed that they would make living things behave like computers. They would treat DNA like command code; they would make cells behave with Boolean logic; and ultimately they would make life programmable. They called their field synthetic biology. Since its inception, synthetic biology has influenced the practice of biological research, current understanding of biological systems, and the biotech economy; by 2019, the global synthetic biology market is projected to be worth $13.4 billion.

Tom Knight spent most of his career teaching computer science and electrical engineering at MIT before playing a major role in creating the engineering discipline of synthetic biology. In 1996 he seeded interest in the field at DARPA, and built a molecular biology laboratory in the MIT computer science department. He developed important standards for engineering biological systems, specifically BioBricks, the first standard assembly technique for functional DNA components, and helped establish the MIT Registry of Standard Biological Parts.

He was one of four founders of IGEM, an international competition between undergraduate teams to design and build biological systems, now hosting 300 teams across the globe. In 2008, he co-founded Ginkgo Bioworks, where he remains a full-time researcher. His interests include minimal organisms, origins of life, and predictive models of biological systems. He is a Fellow of the American Association for the Advancement of Science, a director of the IGEM Foundation, and a member of the International Committee on the Taxonomy of the Mollicutes.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>23</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-03-15t04:03:05+00:00-8ab2a88470aa472</guid>
      <title>Online Harassment, Risky Research, and Activism</title>
<description><![CDATA[<p>Amanda Lenhart is a Senior Research Scientist at the Associated Press-NORC Center for Public Affairs Research. Amanda was formerly a Researcher at the Data & Society Research Institute. At Data & Society, she led a Digital Trust Foundation-funded project examining the prevalence of cyberstalking and digital domestic abuse in the United States. Amanda has also been involved in a Knight Foundation study on youth and mobile news consumption at Data & Society, and has worked on outside projects on the educational technology ecosystem of very young children in Silicon Valley and on paid and unpaid family leave for caregivers.</p>
<p>Alice E. Marwick is Director of the McGannon Communication Research Center and Assistant Professor of Communication and Media Studies at Fordham University. She is also a fellow at Data & Society. Her work examines the legal, political, and social implications of popular social media technologies. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age which examines how people seek online status through attention and visibility. She has written for The New York Times, The New York Review of Books, Wired, and The Guardian, as well as many academic publications. Alice has a PhD from the Department of Media, Culture and Communication at New York University.</p>
<p>Zara Rahman is a feminist and information activist who has worked in over twenty countries in the field of information accessibility and data use among civil society. She is Research Lead at the engine room, a non-profit organization supporting the use of technology and data in advocacy. She is a fellow at Data & Society where her research looks at the role of people who bridge gaps between activists and technologists and facilitate more responsible and effective use of data and technology in activism.</p>
<p>Related links:<br />
Best Practices for Conducting Risky Research</p>
]]></description>
      <pubDate>Wed, 15 Mar 2017 13:29:27 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
<content:encoded><![CDATA[<p>Amanda Lenhart is a Senior Research Scientist at the Associated Press-NORC Center for Public Affairs Research. Amanda was formerly a Researcher at the Data & Society Research Institute. At Data & Society, she led a Digital Trust Foundation-funded project examining the prevalence of cyberstalking and digital domestic abuse in the United States. Amanda has also been involved in a Knight Foundation study on youth and mobile news consumption at Data & Society, and has worked on outside projects on the educational technology ecosystem of very young children in Silicon Valley and on paid and unpaid family leave for caregivers.</p>
<p>Alice E. Marwick is Director of the McGannon Communication Research Center and Assistant Professor of Communication and Media Studies at Fordham University. She is also a fellow at Data & Society. Her work examines the legal, political, and social implications of popular social media technologies. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age which examines how people seek online status through attention and visibility. She has written for The New York Times, The New York Review of Books, Wired, and The Guardian, as well as many academic publications. Alice has a PhD from the Department of Media, Culture and Communication at New York University.</p>
<p>Zara Rahman is a feminist and information activist who has worked in over twenty countries in the field of information accessibility and data use among civil society. She is Research Lead at the engine room, a non-profit organization supporting the use of technology and data in advocacy. She is a fellow at Data & Society where her research looks at the role of people who bridge gaps between activists and technologists and facilitate more responsible and effective use of data and technology in activism.</p>
<p>Related links:<br />
Best Practices for Conducting Risky Research</p>
]]></content:encoded>
      <enclosure length="28131330" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/d106f2c7-ef4c-4076-81ed-75f00a130b28/ep0025_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Online Harassment, Risky Research, and Activism</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:29:15</itunes:duration>
<itunes:summary>Amanda Lenhart, Alice Marwick, &amp; Zara Rahman on the prevalence of online harassment and its potential effects on research and activism.</itunes:summary>
<itunes:subtitle>Amanda Lenhart, Alice Marwick, &amp; Zara Rahman on the prevalence of online harassment and its potential effects on research and activism.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>22</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-02-27t19:04:32+00:00-f7a11feb1e9af20</guid>
      <title>How the Chinese Government Fabricates Social Media Posts</title>
      <description><![CDATA[<p>Jennifer Pan: The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. Jennifer’s research shows that this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity.</p>
<p>In the first large-scale empirical analysis of this operation, Jennifer’s research reveals how to identify the secretive authors of these posts, the posts written by them, and their content. She and her team estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, her research shows that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. Her work infers that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. She will discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.</p>
<p>Jennifer Pan is an Assistant Professor of Communication and, by courtesy, of Political Science and of Sociology at Stanford University. Her research focuses on the politics of authoritarian (non-democratic) countries in the digital age: how autocrats constrain collective action through online censorship, propaganda, and responsiveness; how information proliferation influences the ability of authoritarian regimes to collect reliable information; and how public preferences are arranged and formed. She combines experimental and computational methods with large-scale datasets on political activity in China and other authoritarian regimes to examine these questions. Her work has appeared in peer-reviewed journals such as the American Political Science Review, American Journal of Political Science, and Science. She received her Ph.D. from Harvard University’s Department of Government in 2015. She graduated from Princeton University, summa cum laude, in 2004, and until 2009 she was a consultant at McKinsey &amp; Company based in New York and Beijing.</p>
]]></description>
      <pubDate>Tue, 28 Feb 2017 15:24:01 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Jennifer Pan: The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. Jennifer’s research shows that this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity.</p>
<p>In the first large-scale empirical analysis of this operation, Jennifer’s research reveals how to identify the secretive authors of these posts, the posts written by them, and their content. She and her team estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, her research shows that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. Her work infers that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. She will discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.</p>
<p>Jennifer Pan is an Assistant Professor of Communication and, by courtesy, of Political Science and of Sociology at Stanford University. Her research focuses on the politics of authoritarian (non-democratic) countries in the digital age: how autocrats constrain collective action through online censorship, propaganda, and responsiveness; how information proliferation influences the ability of authoritarian regimes to collect reliable information; and how public preferences are arranged and formed. She combines experimental and computational methods with large-scale datasets on political activity in China and other authoritarian regimes to examine these questions. Her work has appeared in peer-reviewed journals such as the American Political Science Review, American Journal of Political Science, and Science. She received her Ph.D. from Harvard University’s Department of Government in 2015. She graduated from Princeton University, summa cum laude, in 2004, and until 2009 she was a consultant at McKinsey &amp; Company based in New York and Beijing.</p>
]]></content:encoded>
      <enclosure length="28778129" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/6f6d5ff7-5951-406d-ae7d-d7bd00458264/ep0024_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>How the Chinese Government Fabricates Social Media Posts</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:29:54</itunes:duration>
      <itunes:summary>Jennifer Pan: The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. Jennifer’s research shows that this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity.

In the first large-scale empirical analysis of this operation, Jennifer’s research reveals how to identify the secretive authors of these posts, the posts written by them, and their content. She and her team estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, her research shows that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. Her work infers that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. She will discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

Jennifer Pan is an Assistant Professor of Communication and, by courtesy, of Political Science and of Sociology at Stanford University. Her research focuses on the politics of authoritarian (non-democratic) countries in the digital age: how autocrats constrain collective action through online censorship, propaganda, and responsiveness; how information proliferation influences the ability of authoritarian regimes to collect reliable information; and how public preferences are arranged and formed. She combines experimental and computational methods with large-scale datasets on political activity in China and other authoritarian regimes to examine these questions. Her work has appeared in peer-reviewed journals such as the American Political Science Review, American Journal of Political Science, and Science. She received her Ph.D. from Harvard University’s Department of Government in 2015. She graduated from Princeton University, summa cum laude, in 2004, and until 2009 she was a consultant at McKinsey &amp; Company based in New York and Beijing.</itunes:summary>
      <itunes:subtitle>Jennifer Pan: The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government’s side in political and policy debates. Jennifer’s research shows that this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime’s strategic objective in pursuing this activity.

In the first large-scale empirical analysis of this operation, Jennifer’s research reveals how to identify the secretive authors of these posts, the posts written by them, and their content. She and her team estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, her research shows that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. Her work infers that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. She will discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

Jennifer Pan is an Assistant Professor of Communication and, by courtesy, of Political Science and of Sociology at Stanford University. Her research focuses on the politics of authoritarian (non-democratic) countries in the digital age: how autocrats constrain collective action through online censorship, propaganda, and responsiveness; how information proliferation influences the ability of authoritarian regimes to collect reliable information; and how public preferences are arranged and formed. She combines experimental and computational methods with large-scale datasets on political activity in China and other authoritarian regimes to examine these questions. Her work has appeared in peer-reviewed journals such as the American Political Science Review, American Journal of Political Science, and Science. She received her Ph.D. from Harvard University’s Department of Government in 2015. She graduated from Princeton University, summa cum laude, in 2004, and until 2009 she was a consultant at McKinsey &amp; Company based in New York and Beijing.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>21</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-02-21t16:44:07+00:00-4f5e735f8ed98e9</guid>
      <title>Data Science from Wall Street to Startups to Academic Biomedicine</title>
      <description><![CDATA[<p>Jeff Hammerbacher gives an overview of his work at Hammer Lab where he and his colleagues use data science to understand and improve the immune response to cancer. He also discusses the design of Hammer Lab, particularly focusing on ways that the lab is directly informed and motivated by his prior work experience at Bear Stearns, Facebook, and Cloudera.</p>
<p>Jeff is an Assistant Professor at the Medical University of South Carolina and the Icahn School of Medicine at Mount Sinai, a founder and the Chief Scientist of Cloudera, an angel investor with his wife Halle Tecco at Techammer, and a board member of CIOX Health and Sage Bionetworks. Jeff was an Entrepreneur in Residence at Accel Partners immediately prior to founding Cloudera. Before Accel, he conceived, built, and led the Data team at Facebook. Before joining Facebook, Jeff was a quantitative analyst on Wall Street. Jeff earned his Bachelor’s Degree in Mathematics from Harvard University.</p>
]]></description>
      <pubDate>Tue, 21 Feb 2017 18:28:55 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Jeff Hammerbacher gives an overview of his work at Hammer Lab where he and his colleagues use data science to understand and improve the immune response to cancer. He also discusses the design of Hammer Lab, particularly focusing on ways that the lab is directly informed and motivated by his prior work experience at Bear Stearns, Facebook, and Cloudera.</p>
<p>Jeff is an Assistant Professor at the Medical University of South Carolina and the Icahn School of Medicine at Mount Sinai, a founder and the Chief Scientist of Cloudera, an angel investor with his wife Halle Tecco at Techammer, and a board member of CIOX Health and Sage Bionetworks. Jeff was an Entrepreneur in Residence at Accel Partners immediately prior to founding Cloudera. Before Accel, he conceived, built, and led the Data team at Facebook. Before joining Facebook, Jeff was a quantitative analyst on Wall Street. Jeff earned his Bachelor’s Degree in Mathematics from Harvard University.</p>
]]></content:encoded>
      <enclosure length="46854465" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/f8c0b2d8-76e8-4648-b23a-ffdd68acfe64/ep0023_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Data Science from Wall Street to Startups to Academic Biomedicine</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:48:45</itunes:duration>
      <itunes:summary>Jeff Hammerbacher gives an overview of his work at Hammer Lab where he and his colleagues use data science to understand and improve the immune response to cancer. He also discusses the design of Hammer Lab, particularly focusing on ways that the lab is directly informed and motivated by his prior work experience at Bear Stearns, Facebook, and Cloudera.

Jeff is an Assistant Professor at the Medical University of South Carolina and the Icahn School of Medicine at Mount Sinai, a founder and the Chief Scientist of Cloudera, an angel investor with his wife Halle Tecco at Techammer, and a board member of CIOX Health and Sage Bionetworks. Jeff was an Entrepreneur in Residence at Accel Partners immediately prior to founding Cloudera. Before Accel, he conceived, built, and led the Data team at Facebook. Before joining Facebook, Jeff was a quantitative analyst on Wall Street. Jeff earned his Bachelor’s Degree in Mathematics from Harvard University.</itunes:summary>
      <itunes:subtitle>Jeff Hammerbacher gives an overview of his work at Hammer Lab where he and his colleagues use data science to understand and improve the immune response to cancer. He also discusses the design of Hammer Lab, particularly focusing on ways that the lab is directly informed and motivated by his prior work experience at Bear Stearns, Facebook, and Cloudera.

Jeff is an Assistant Professor at the Medical University of South Carolina and the Icahn School of Medicine at Mount Sinai, a founder and the Chief Scientist of Cloudera, an angel investor with his wife Halle Tecco at Techammer, and a board member of CIOX Health and Sage Bionetworks. Jeff was an Entrepreneur in Residence at Accel Partners immediately prior to founding Cloudera. Before Accel, he conceived, built, and led the Data team at Facebook. Before joining Facebook, Jeff was a quantitative analyst on Wall Street. Jeff earned his Bachelor’s Degree in Mathematics from Harvard University.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>20</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-02-14t19:18:44+00:00-3c0429e2c1842d0</guid>
      <title>Privacy in the Era of Personal Genomics</title>
      <description><![CDATA[<p>PANELISTS:</p>
<p><a href="https://www.linkedin.com/in/jasonbobe">JASON BOBE</a></p>
<p>Jason Bobe is Associate Professor and Director of the Sharing Lab at the Icahn Institute at Mount Sinai. For the past 10 years, Jason has been at the forefront of innovative data sharing practices in health research. His work on the Personal Genome Project at Harvard, and now in three other countries, pioneered new approaches for creating well-consented public data, cell lines and other open resources. These efforts led to important changes in the governance of identifiable health data and also led to the development of valuable new products, such as NIST’s standardized human genome reference materials (e.g. NIST RM 8392), now used for calibrating clinical laboratory equipment worldwide.</p>
<p>More recently, he co-founded Open Humans, a platform that facilitates participant-centered data sharing between individuals and the health research community. At the Sharing Lab, he attempts to produce health research studies that people actually want to join and works on improving our understanding of how to make great, impactful studies capable of engaging the general public and achieving social good. He is also the leader of the Resilience Project, an effort leveraging open science approaches to identify and learn how some people are able to avoid disease despite having serious risk factors. Last year, he was selected to be in the inaugural class of Mozilla Open Science Fellows. He is also co-founder of two nonprofits: Open Humans Foundation and DIYbio.org.</p>
<p><a href="https://www.linkedin.com/in/sophie-zaaijer-a21b691">SOPHIE ZAAIJER</a></p>
<p>Dr. Sophie Zaaijer is a Postdoctoral Researcher in the Erlich lab at the New York Genome Center and Columbia University. Sophie is from the Netherlands, where she earned undergraduate degrees in Music (viola) and Food Technology. For her master’s degree, she studied Medical Biotechnology at Wageningen University and went to Harvard Medical School to finish her thesis work in Monica Colaiacovo’s lab. She next went on to do a PhD in Molecular Biology and Genetics in Julie Cooper’s lab at Cancer Research UK, London (now the Crick Institute) and at the National Institutes of Health, Bethesda. Sophie focuses on genome technology and the growing impact of genomics on our daily lives.</p>
<p>MODERATOR:</p>
<p><a href="https://datasociety.net/people/dewey-hagborg-heather/">HEATHER DEWEY-HAGBORG</a></p>
<p>Heather Dewey-Hagborg is a transdisciplinary artist and educator who is interested in art as research and critical practice. Her controversial biopolitical art practice includes Stranger Visions in which she created portrait sculptures from analyses of genetic material (hair, cigarette butts, chewed up gum) collected in public places.</p>
<p>Heather has shown work internationally at events and venues including the World Economic Forum, Shenzhen Urbanism and Architecture Biennale, the New Museum, and MoMA PS1. Her work has been widely discussed in the media, from the New York Times and the BBC to TED and Wired.<br />
She is an Assistant Professor of Art and Technology Studies at the School of the Art Institute of Chicago and a 2016 Creative Capital award grantee in the area of Emerging Fields.</p>
<p>INTRODUCTION:</p>
<p><a href="https://datasociety.net/people/grushkin-daniel/">DANIEL GRUSHKIN</a></p>
<p>Daniel Grushkin is founder of the Biodesign Challenge, an international university competition that asks students to envision future applications of biotech. He is co-founder and Cultural Programs Director of Genspace, a nonprofit community laboratory dedicated to promoting citizen science and access to biotechnology. Fast Company ranked Genspace fourth among the top 10 most innovative education companies in the world.</p>
<p>Daniel is a Fellow at Data & Society. From 2013 to 2014, he was a fellow at the Woodrow Wilson International Center for Scholars, where he researched synthetic biology. He was an Emerging Leader in Biosecurity at the UPMC Center for Health Security in 2014. As a journalist, he has reported on the intersection of biotechnology, culture, and business for publications including Bloomberg Businessweek, Fast Company, Scientific American, and Popular Science.</p>
]]></description>
      <pubDate>Tue, 14 Feb 2017 22:15:35 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>PANELISTS:</p>
<p><a href="https://www.linkedin.com/in/jasonbobe">JASON BOBE</a></p>
<p>Jason Bobe is Associate Professor and Director of the Sharing Lab at the Icahn Institute at Mount Sinai. For the past 10 years, Jason has been at the forefront of innovative data sharing practices in health research. His work on the Personal Genome Project at Harvard, now extended to three other countries, pioneered new approaches for creating well-consented public data, cell lines, and other open resources. These efforts led to important changes in the governance of identifiable health data and also led to the development of valuable new products, such as NIST’s standardized human genome reference materials (e.g. NIST RM 8392), now used for calibrating clinical laboratory equipment worldwide.</p>
<p>More recently, he co-founded Open Humans, a platform that facilitates participant-centered data sharing between individuals and the health research community. At the Sharing Lab, he attempts to produce health research studies that people actually want to join and works on improving our understanding of how to make great, impactful studies capable of engaging the general public and achieving social good. He is also the leader of the Resilience Project, an effort leveraging open science approaches to identify and learn how some people are able to avoid disease despite having serious risk factors. Last year, he was selected to be in the inaugural class of Mozilla Open Science Fellows. He is also co-founder of two nonprofits: Open Humans Foundation and DIYbio.org.</p>
<p><a href="https://www.linkedin.com/in/sophie-zaaijer-a21b691">SOPHIE ZAAIJER</a></p>
<p>Dr. Sophie Zaaijer is a Postdoctoral Researcher in the Erlich lab at the New York Genome Center and Columbia University. Sophie is from the Netherlands, where she earned undergraduate degrees in Music (viola) and Food Technology. For her master’s degree, she studied Medical Biotechnology at Wageningen University and went to Harvard Medical School to finish her thesis work in Monica Colaiacovo’s lab. She next went on to do a PhD in Molecular Biology and Genetics in Julie Cooper’s lab at Cancer Research UK, London (now the Crick Institute) and at the National Institutes of Health, Bethesda. Sophie focuses on genome technology and the growing impact of genomics on our daily lives.</p>
<p>MODERATOR:</p>
<p><a href="https://datasociety.net/people/dewey-hagborg-heather/">HEATHER DEWEY-HAGBORG</a></p>
<p>Heather Dewey-Hagborg is a transdisciplinary artist and educator who is interested in art as research and critical practice. Her controversial biopolitical art practice includes Stranger Visions in which she created portrait sculptures from analyses of genetic material (hair, cigarette butts, chewed up gum) collected in public places.</p>
<p>Heather has shown work internationally at events and venues including the World Economic Forum, Shenzhen Urbanism and Architecture Biennale, the New Museum, and MoMA PS1. Her work has been widely discussed in the media, from the New York Times and the BBC to TED and Wired.<br />
She is an Assistant Professor of Art and Technology Studies at the School of the Art Institute of Chicago and a 2016 Creative Capital award grantee in the area of Emerging Fields.</p>
<p>INTRODUCTION:</p>
<p><a href="https://datasociety.net/people/grushkin-daniel/">DANIEL GRUSHKIN</a></p>
<p>Daniel Grushkin is founder of the Biodesign Challenge, an international university competition that asks students to envision future applications of biotech. He is co-founder and Cultural Programs Director of Genspace, a nonprofit community laboratory dedicated to promoting citizen science and access to biotechnology. Fast Company ranked Genspace fourth among the top 10 most innovative education companies in the world.</p>
<p>Daniel is a Fellow at Data & Society. From 2013 to 2014, he was a fellow at the Woodrow Wilson International Center for Scholars, where he researched synthetic biology. He was an Emerging Leader in Biosecurity at the UPMC Center for Health Security in 2014. As a journalist, he has reported on the intersection of biotechnology, culture, and business for publications including Bloomberg Businessweek, Fast Company, Scientific American, and Popular Science.</p>
]]></content:encoded>
      <enclosure length="51802532" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/426b401f-9404-4442-a48a-744c2f5578b1/ep0022_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Privacy in the Era of Personal Genomics</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:53:51</itunes:duration>
      <itunes:summary>Jason Bobe, Sophie Zaaijer, Heather Dewey-Hagborg,  Daniel Grushkin - Genomics, the collection and interpretation of DNA sequences, has long promised to change the way doctors practice medicine, scientists research disease and the environment, and ultimately the way we understand ourselves. In the past, reading DNA was slow, laborious, and expensive. Reading the first human genome cost $3 billion and took 13 years to complete in 2003. Today, that same genome could be read for roughly $1,000 in a few hours. And a gene sequencer, once a lumbering machine, can now fit into the palm of a hand.

In less than a decade, the practice of genomics has become ubiquitous, and the data sets enormous. Its wide adoption comes barbed with ethical challenges, tensions between scientific progress and individual privacy, and a heritage based in racial discrimination. 

Panelists: Jason Bobe, Sophie Zaaijer
Moderator: Heather Dewey-Hagborg
Introduced by: Daniel Grushkin
Presented by: Genspace and Data &amp; Society

ABOUT THE SERIES

The Biotech Futures Talk + Lab Series explores the implications of and ways in which biology is becoming a data science. Each talk is paired with a 3-4 hour lab workshop at Genspace for Data &amp; Society and Genspace community members to demonstrate how these themes become realized in the lab. Lab details to follow.</itunes:summary>
      <itunes:subtitle>Jason Bobe, Sophie Zaaijer, Heather Dewey-Hagborg,  Daniel Grushkin - Genomics, the collection and interpretation of DNA sequences, has long promised to change the way doctors practice medicine, scientists research disease and the environment, and ultimately the way we understand ourselves. In the past, reading DNA was slow, laborious, and expensive. Reading the first human genome cost $3 billion and took 13 years to complete in 2003. Today, that same genome could be read for roughly $1,000 in a few hours. And a gene sequencer, once a lumbering machine, can now fit into the palm of a hand.

In less than a decade, the practice of genomics has become ubiquitous, and the data sets enormous. Its wide adoption comes barbed with ethical challenges, tensions between scientific progress and individual privacy, and a heritage based in racial discrimination. 

Panelists: Jason Bobe, Sophie Zaaijer
Moderator: Heather Dewey-Hagborg
Introduced by: Daniel Grushkin
Presented by: Genspace and Data &amp; Society

ABOUT THE SERIES

The Biotech Futures Talk + Lab Series explores the implications of and ways in which biology is becoming a data science. Each talk is paired with a 3-4 hour lab workshop at Genspace for Data &amp; Society and Genspace community members to demonstrate how these themes become realized in the lab. Lab details to follow.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>19</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-12-05t19:25:28+00:00-f06c41112d4b967</guid>
      <title>Living and Learning in the Digital Age</title>
      <description><![CDATA[<p>Sonia Livingstone asks where and why digital media – and digital media learning – fit into the lives of young teenagers living in complex urban societies. Do they help build valued connections, or enhance opportunities to create, learn and participate? Or do they lead to hyper-connection, surveillance and loss of privacy for young people? Reflecting on a year’s ethnography (free to read at http://connectedyouth.nyupress.org/) with a class of 13-year-olds, exploring their sites of living and learning online and offline, Sonia argues that their understandable desire for ‘positive disconnections’ means crucial opportunities to learn are being missed. These might be overcome with a more child-centered or even child-rights approach to the digital age.</p>
]]></description>
      <pubDate>Tue, 07 Feb 2017 22:07:20 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Sonia Livingstone asks where and why digital media – and digital media learning – fit into the lives of young teenagers living in complex urban societies. Do they help build valued connections, or enhance opportunities to create, learn and participate? Or do they lead to hyper-connection, surveillance and loss of privacy for young people? Reflecting on a year’s ethnography (free to read at http://connectedyouth.nyupress.org/) with a class of 13-year-olds, exploring their sites of living and learning online and offline, Sonia argues that their understandable desire for ‘positive disconnections’ means crucial opportunities to learn are being missed. These might be overcome with a more child-centered or even child-rights approach to the digital age.</p>
]]></content:encoded>
      <enclosure length="44177409" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/2bb7126d-7749-4773-9668-cc9cea3eceaf/ep0016_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Living and Learning in the Digital Age</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:45:58</itunes:duration>
      <itunes:summary>Sonia Livingstone asks where and why digital media – and digital media learning – fit into the lives of young teenagers living in complex urban societies. Do they help build valued connections, or enhance opportunities to create, learn and participate? Or do they lead to hyper-connection, surveillance and loss of privacy for young people? Reflecting on a year’s ethnography (free to read at http://connectedyouth.nyupress.org/) with a class of 13-year-olds, exploring their sites of living and learning online and offline, Sonia argues that their understandable desire for ‘positive disconnections’ means crucial opportunities to learn are being missed. These might be overcome with a more child-centered or even child-rights approach to the digital age.</itunes:summary>
      <itunes:subtitle>Sonia Livingstone asks where and why digital media – and digital media learning – fit into the lives of young teenagers living in complex urban societies. Do they help build valued connections, or enhance opportunities to create, learn and participate? Or do they lead to hyper-connection, surveillance and loss of privacy for young people? Reflecting on a year’s ethnography (free to read at http://connectedyouth.nyupress.org/) with a class of 13-year-olds, exploring their sites of living and learning online and offline, Sonia argues that their understandable desire for ‘positive disconnections’ means crucial opportunities to learn are being missed. These might be overcome with a more child-centered or even child-rights approach to the digital age.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>18</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-07t18:41:37+00:00-6f8ed3966c1a413</guid>
      <title>The Messy Realities of Digital Schooling</title>
      <description><![CDATA[<p>In this Databite, Neil Selwyn works through some emerging headline findings from a new three-year study of digital technology use in Australian high schools. In particular, Neil highlights the ways in which schools’ actual uses of technology often contradict presumptions of ‘connected learning’, ‘digital education’ and the like. Instead, Neil considers…</p>
<p>• how and why recent innovations such as maker culture, personalised learning and data-driven education are subsumed within more restrictive institutional ‘logics’;<br />
• the tensions of ‘bring your own device’ and other permissive digital learning practices;<br />
• how alternative and resistant forms of technology use by students tend to militate <em>against</em> educational engagement and/or learning gains;<br />
• the ways in which digital technologies enhance (rather than disrupt) existing forms of advantage and privilege amongst groups of students;<br />
• how the distributed nature of technology leadership and innovation throughout schools tends to restrict widespread institutional change and reform;<br />
• the ambiguous role that digital technologies play in teachers’ work and the labor of teaching;<br />
• the often surprising ways that technology seems to take hold throughout schools – echoing broader imperatives of accountability, surveillance and control.</p>
<p>The talk provides plenty of scope to consider how technology use in schools might be ‘otherwise’, and alternate agendas to be pursued by educators, policymakers, technology developers and other stakeholders in the ed-tech space.</p>
]]></description>
      <pubDate>Tue, 07 Feb 2017 22:06:01 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>In this Databite, Neil Selwyn works through some emerging headline findings from a new three-year study of digital technology use in Australian high schools. In particular, Neil highlights the ways in which schools’ actual uses of technology often contradict presumptions of ‘connected learning’, ‘digital education’ and the like. Instead, Neil considers…</p>
<p>• how and why recent innovations such as maker culture, personalised learning and data-driven education are subsumed within more restrictive institutional ‘logics’;<br />
• the tensions of ‘bring your own device’ and other permissive digital learning practices;<br />
• how alternative and resistant forms of technology use by students tend to militate <em>against</em> educational engagement and/or learning gains;<br />
• the ways in which digital technologies enhance (rather than disrupt) existing forms of advantage and privilege amongst groups of students;<br />
• how the distributed nature of technology leadership and innovation throughout schools tends to restrict widespread institutional change and reform;<br />
• the ambiguous role that digital technologies play in teachers’ work and the labor of teaching;<br />
• the often surprising ways that technology seems to take hold throughout schools – echoing broader imperatives of accountability, surveillance and control.</p>
<p>The talk provides plenty of scope to consider how technology use in schools might be ‘otherwise’, and alternate agendas to be pursued by educators, policymakers, technology developers and other stakeholders in the ed-tech space.</p>
]]></content:encoded>
      <enclosure length="36731815" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/c6da7a7e-a8df-4fa4-a635-e52cc352f6a0/ep0005_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Messy Realities of Digital Schooling</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:38:13</itunes:duration>
      <itunes:summary>In this Databite, Neil Selwyn works through some emerging headline findings from a new three-year study of digital technology use in Australian high schools. In particular, Neil highlights the ways in which schools’ actual uses of technology often contradict presumptions of ‘connected learning’, ‘digital education’ and the like. Instead, Neil considers…

• how and why recent innovations such as maker culture, personalised learning and data-driven education are subsumed within more restrictive institutional ‘logics’;
• the tensions of ‘bring your own device’ and other permissive digital learning practices;
• how alternative and resistant forms of technology use by students tend to militate *against* educational engagement and/or learning gains;
• the ways in which digital technologies enhance (rather than disrupt) existing forms of advantage and privilege amongst groups of students;
• how the distributed nature of technology leadership and innovation throughout schools tends to restrict widespread institutional change and reform;
• the ambiguous role that digital technologies play in teachers’ work and the labor of teaching;
• the often surprising ways that technology seems to take hold throughout schools – echoing broader imperatives of accountability, surveillance and control.

The talk provides plenty of scope to consider how technology use in schools might be ‘otherwise’, and alternate agendas to be pursued by educators, policymakers, technology developers and other stakeholders in the ed-tech space.</itunes:summary>
      <itunes:subtitle>In this Databite, Neil Selwyn works through some emerging headline findings from a new three-year study of digital technology use in Australian high schools. In particular, Neil highlights the ways in which schools’ actual uses of technology often contradict presumptions of ‘connected learning’, ‘digital education’ and the like. Instead, Neil considers…

• how and why recent innovations such as maker culture, personalised learning and data-driven education are subsumed within more restrictive institutional ‘logics’;
• the tensions of ‘bring your own device’ and other permissive digital learning practices;
• how alternative and resistant forms of technology use by students tend to militate *against* educational engagement and/or learning gains;
• the ways in which digital technologies enhance (rather than disrupt) existing forms of advantage and privilege amongst groups of students;
• how the distributed nature of technology leadership and innovation throughout schools tends to restrict widespread institutional change and reform;
• the ambiguous role that digital technologies play in teachers’ work and the labor of teaching;
• the often surprising ways that technology seems to take hold throughout schools – echoing broader imperatives of accountability, surveillance and control.

The talk provides plenty of scope to consider how technology use in schools might be ‘otherwise’, and alternate agendas to be pursued by educators, policymakers, technology developers and other stakeholders in the ed-tech space.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>17</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-28t19:01:16+00:00-10d8aa5132d28b1</guid>
      <title>Student Privacy and Big Data</title>
      <description><![CDATA[<p>Elana Zeide on Student Privacy and Big Data. With the rise of online learning environments, student records are no longer just basic academic and administrative information, but include data and metadata generated from student interaction with digital platforms as well as unexpected sources like student ID badges and social media. Applying big data analytics to this wealth of information has the potential to revolutionize education, but also risks unintended consequences that affect the core values of the education system as well as civil rights and liberties.</p>
<p>The current student privacy regulatory regime does not address the issues raised by modern information technology and data-driven decision-making in education.  This presentation highlights key issues of the student privacy debate, proposed reforms, and emerging legal and ethical issues, as well as implications of data-driven education environments and decision-making that extend far beyond school settings.</p>
]]></description>
      <pubDate>Tue, 07 Feb 2017 22:04:32 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Elana Zeide on Student Privacy and Big Data. With the rise of online learning environments, student records are no longer just basic academic and administrative information, but include data and metadata generated from student interaction with digital platforms as well as unexpected sources like student ID badges and social media. Applying big data analytics to this wealth of information has the potential to revolutionize education, but also risks unintended consequences that affect the core values of the education system as well as civil rights and liberties.</p>
<p>The current student privacy regulatory regime does not address the issues raised by modern information technology and data-driven decision-making in education.  This presentation highlights key issues of the student privacy debate, proposed reforms, and emerging legal and ethical issues, as well as implications of data-driven education environments and decision-making that extend far beyond school settings.</p>
]]></content:encoded>
      <enclosure length="26930846" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/8d57fce4-d77e-40f7-bcd3-e823de54f1ee/ep0011_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Student Privacy and Big Data</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:28:01</itunes:duration>
      <itunes:summary>Elana Zeide on Student Privacy and Big Data. With the rise of online learning environments, student records are no longer just basic academic and administrative information, but include data and metadata generated from student interaction with digital platforms as well as unexpected sources like student ID badges and social media. Applying big data analytics to this wealth of information has the potential to revolutionize education, but also risks unintended consequences that affect the core values of the education system as well as civil rights and liberties.

The current student privacy regulatory regime does not address the issues raised by modern information technology and data-driven decision-making in education.  This presentation highlights key issues of the student privacy debate, proposed reforms, and emerging legal and ethical issues, as well as implications of data-driven education environments and decision-making that extend far beyond school settings.</itunes:summary>
      <itunes:subtitle>Elana Zeide on Student Privacy and Big Data. With the rise of online learning environments, student records are no longer just basic academic and administrative information, but include data and metadata generated from student interaction with digital platforms as well as unexpected sources like student ID badges and social media. Applying big data analytics to this wealth of information has the potential to revolutionize education, but also risks unintended consequences that affect the core values of the education system as well as civil rights and liberties.

The current student privacy regulatory regime does not address the issues raised by modern information technology and data-driven decision-making in education.  This presentation highlights key issues of the student privacy debate, proposed reforms, and emerging legal and ethical issues, as well as implications of data-driven education environments and decision-making that extend far beyond school settings.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>16</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-01-30t18:32:37+00:00-2d160e17ddfa527</guid>
      <title>An AI Pattern Language: Accounting for Human Factors &amp; Human Frames</title>
      <description><![CDATA[<p>Madeleine Clare Elish presents “An AI Pattern Language,” coauthored with Tim Hwang.  The publication is the culmination of two years of research and conversations with a range of industry practitioners working in intelligent systems and artificial intelligence.  The work was supported by the John D. and Catherine T. MacArthur Foundation. You can purchase your own copy or download the PDF at autonomy.datasociety.net.</p>
]]></description>
      <pubDate>Mon, 30 Jan 2017 18:47:46 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Madeleine Clare Elish presents “An AI Pattern Language,” coauthored with Tim Hwang.  The publication is the culmination of two years of research and conversations with a range of industry practitioners working in intelligent systems and artificial intelligence.  The work was supported by the John D. and Catherine T. MacArthur Foundation. You can purchase your own copy or download the PDF at autonomy.datasociety.net.</p>
]]></content:encoded>
      <enclosure length="30967085" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/2d84bd1c-6ad8-4208-945b-de5459e0b090/ep0021_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>An AI Pattern Language: Accounting for Human Factors &amp; Human Frames</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:32:12</itunes:duration>
      <itunes:summary>Madeleine Clare Elish presents “An AI Pattern Language,” coauthored with Tim Hwang.  The publication is the culmination of two years of research and conversations with a range of industry practitioners working in intelligent systems and artificial intelligence.  The work was supported by the John D. and Catherine T. MacArthur Foundation. You can purchase your own copy or download the PDF at autonomy.datasociety.net.</itunes:summary>
      <itunes:subtitle>Madeleine Clare Elish presents “An AI Pattern Language,” coauthored with Tim Hwang.  The publication is the culmination of two years of research and conversations with a range of industry practitioners working in intelligent systems and artificial intelligence.  The work was supported by the John D. and Catherine T. MacArthur Foundation. You can purchase your own copy or download the PDF at autonomy.datasociety.net.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2017-01-30t16:15:18+00:00-dd4b32f78d11e64</guid>
      <title>Predictive Policing: Bias In, Bias Out</title>
      <description><![CDATA[<p>Kristian Lum will elaborate on the concept of “bias in, bias out” in machine learning with a simple, non-technical example.  She will then demonstrate how applying machine learning to police records can result in the over-policing of historically over-policed communities. Using a case study from Oakland, CA, she will show one specific case of how predictive policing not only perpetuates the biases that were previously encoded in the police data, but – under some circumstances – actually amplifies those biases.</p>
]]></description>
      <pubDate>Mon, 30 Jan 2017 16:56:51 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Kristian Lum will elaborate on the concept of “bias in, bias out” in machine learning with a simple, non-technical example.  She will then demonstrate how applying machine learning to police records can result in the over-policing of historically over-policed communities. Using a case study from Oakland, CA, she will show one specific case of how predictive policing not only perpetuates the biases that were previously encoded in the police data, but – under some circumstances – actually amplifies those biases.</p>
]]></content:encoded>
      <enclosure length="29449350" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/29ec56de-396d-4e66-ad3d-bd633fb6e246/ep0017_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Predictive Policing: Bias In, Bias Out</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:30:38</itunes:duration>
      <itunes:summary>Kristian Lum will elaborate on the concept of “bias in, bias out” in machine learning with a simple, non-technical example.  She will then demonstrate how applying machine learning to police records can result in the over-policing of historically over-policed communities. Using a case study from Oakland, CA, she will show one specific case of how predictive policing not only perpetuates the biases that were previously encoded in the police data, but – under some circumstances – actually amplifies those biases.</itunes:summary>
      <itunes:subtitle>Kristian Lum will elaborate on the concept of “bias in, bias out” in machine learning with a simple, non-technical example.  She will then demonstrate how applying machine learning to police records can result in the over-policing of historically over-policed communities. Using a case study from Oakland, CA, she will show one specific case of how predictive policing not only perpetuates the biases that were previously encoded in the police data, but – under some circumstances – actually amplifies those biases.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-28t18:19:32+00:00-7dedec8ef721a65</guid>
      <title>Social Dilemmas Around New Media</title>
      <description><![CDATA[<p>Ilana Gershon discusses how we all have moments in which someone’s use of new media baffles us, and we have to ask a friend how to respond. Often it isn’t just the content of the message; it is also the use of that particular medium in that way that leaves us scratching our heads. In this talk, Gershon considers what anthropological concepts can help us understand our confusion. Taking LinkedIn as her case study, she analyzes the dilemmas people face when using LinkedIn to look for a job. This serves as a starting point for discussing how the newness of new media generates social dilemmas, especially for people looking for a job today.</p>
]]></description>
      <pubDate>Wed, 25 Jan 2017 17:41:55 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Ilana Gershon discusses how we all have moments in which someone’s use of new media baffles us, and we have to ask a friend how to respond. Often it isn’t just the content of the message; it is also the use of that particular medium in that way that leaves us scratching our heads. In this talk, Gershon considers what anthropological concepts can help us understand our confusion. Taking LinkedIn as her case study, she analyzes the dilemmas people face when using LinkedIn to look for a job. This serves as a starting point for discussing how the newness of new media generates social dilemmas, especially for people looking for a job today.</p>
]]></content:encoded>
      <enclosure length="40114967" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/2ec7375c-a674-46e9-9dcf-ed7e853b2637/ep0009_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Social Dilemmas Around New Media</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:41:44</itunes:duration>
      <itunes:summary>Ilana Gershon discusses how we all have moments in which someone’s use of new media baffles us, and we have to ask a friend how to respond. Often it isn’t just the content of the message; it is also the use of that particular medium in that way that leaves us scratching our heads. In this talk, Gershon considers what anthropological concepts can help us understand our confusion. Taking LinkedIn as her case study, she analyzes the dilemmas people face when using LinkedIn to look for a job. This serves as a starting point for discussing how the newness of new media generates social dilemmas, especially for people looking for a job today.</itunes:summary>
      <itunes:subtitle>Ilana Gershon discusses how we all have moments in which someone’s use of new media baffles us, and we have to ask a friend how to respond. Often it isn’t just the content of the message; it is also the use of that particular medium in that way that leaves us scratching our heads. In this talk, Gershon considers what anthropological concepts can help us understand our confusion. Taking LinkedIn as her case study, she analyzes the dilemmas people face when using LinkedIn to look for a job. This serves as a starting point for discussing how the newness of new media generates social dilemmas, especially for people looking for a job today.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-28t19:51:09+00:00-a9abc001458ad97</guid>
      <title>Self-regulation in Sensor Society</title>
      <description><![CDATA[<p>Natasha Schüll – From the NSA scandal to Facebook’s controversial “mood experiment,” the past decade has seen heated debate over the ways that governments and corporations collect data on citizens and consumers, the ends to which they use it, and the threat this poses to civil liberties. Yet even as this discussion over surveillant monitoring unfolds, the public has embraced practices and products of self-tracking, applying sensor-laden patches, wristbands, and pendants to their own bodies.</p>
<p>Drawing on ethnographic fieldwork, this talk explores how mainstream self-tracking technologies – in their design, marketing, and use – increasingly part ways with the ethos of intensive self-attention found within the Quantified Self (QS) community, serving as digital compasses to guide consumers through the confounding, tempting, and sometimes toxic landscape of everyday choice making and lifestyle management (for instance, by regulating the micro-rhythms of their bites, steps, sips, and breaths). By offering them a way to fulfill the cultural demand for self-management while delegating the often tedious, sometimes existentially taxing labor involved in meeting that demand, such devices at once exemplify and short-circuit ideals of individual agency and responsibility.</p>
<p>In the story of self-tracking technology and its increasing automation, a certain ambivalence over the terms of contemporary selfhood comes to the fore. Are there any connections to be drawn between this ambivalence and broader debates over governmental and corporate surveillance, data privacy, and the possibility for resistance?</p>
]]></description>
      <pubDate>Wed, 25 Jan 2017 17:40:21 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Natasha Schüll – From the NSA scandal to Facebook’s controversial “mood experiment,” the past decade has seen heated debate over the ways that governments and corporations collect data on citizens and consumers, the ends to which they use it, and the threat this poses to civil liberties. Yet even as this discussion over surveillant monitoring unfolds, the public has embraced practices and products of self-tracking, applying sensor-laden patches, wristbands, and pendants to their own bodies.</p>
<p>Drawing on ethnographic fieldwork, this talk explores how mainstream self-tracking technologies – in their design, marketing, and use – increasingly part ways with the ethos of intensive self-attention found within the Quantified Self (QS) community, serving as digital compasses to guide consumers through the confounding, tempting, and sometimes toxic landscape of everyday choice making and lifestyle management (for instance, by regulating the micro-rhythms of their bites, steps, sips, and breaths). By offering them a way to fulfill the cultural demand for self-management while delegating the often tedious, sometimes existentially taxing labor involved in meeting that demand, such devices at once exemplify and short-circuit ideals of individual agency and responsibility.</p>
<p>In the story of self-tracking technology and its increasing automation, a certain ambivalence over the terms of contemporary selfhood comes to the fore. Are there any connections to be drawn between this ambivalence and broader debates over governmental and corporate surveillance, data privacy, and the possibility for resistance?</p>
]]></content:encoded>
      <enclosure length="40606475" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/1e2639a7-2800-4ac0-b334-232b1c6541f0/ep0012_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Self-regulation in Sensor Society</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:42:16</itunes:duration>
      <itunes:summary>Natasha Schüll – From the NSA scandal to Facebook’s controversial “mood experiment,” the past decade has seen heated debate over the ways that governments and corporations collect data on citizens and consumers, the ends to which they use it, and the threat this poses to civil liberties. Yet even as this discussion over surveillant monitoring unfolds, the public has embraced practices and products of self-tracking, applying sensor-laden patches, wristbands, and pendants to their own bodies.

Drawing on ethnographic fieldwork, this talk explores how mainstream self-tracking technologies – in their design, marketing, and use – increasingly part ways with the ethos of intensive self-attention found within the Quantified Self (QS) community, serving as digital compasses to guide consumers through the confounding, tempting, and sometimes toxic landscape of everyday choice making and lifestyle management (for instance, by regulating the micro-rhythms of their bites, steps, sips, and breaths). By offering them a way to fulfill the cultural demand for self-management while delegating the often tedious, sometimes existentially taxing labor involved in meeting that demand, such devices at once exemplify and short-circuit ideals of individual agency and responsibility.

In the story of self-tracking technology and its increasing automation, a certain ambivalence over the terms of contemporary selfhood comes to the fore. Are there any connections to be drawn between this ambivalence and broader debates over governmental and corporate surveillance, data privacy, and the possibility for resistance?</itunes:summary>
      <itunes:subtitle>Natasha Schüll – From the NSA scandal to Facebook’s controversial “mood experiment,” the past decade has seen heated debate over the ways that governments and corporations collect data on citizens and consumers, the ends to which they use it, and the threat this poses to civil liberties. Yet even as this discussion over surveillant monitoring unfolds, the public has embraced practices and products of self-tracking, applying sensor-laden patches, wristbands, and pendants to their own bodies.

Drawing on ethnographic fieldwork, this talk explores how mainstream self-tracking technologies – in their design, marketing, and use – increasingly part ways with the ethos of intensive self-attention found within the Quantified Self (QS) community, serving as digital compasses to guide consumers through the confounding, tempting, and sometimes toxic landscape of everyday choice making and lifestyle management (for instance, by regulating the micro-rhythms of their bites, steps, sips, and breaths). By offering them a way to fulfill the cultural demand for self-management while delegating the often tedious, sometimes existentially taxing labor involved in meeting that demand, such devices at once exemplify and short-circuit ideals of individual agency and responsibility.

In the story of self-tracking technology and its increasing automation, a certain ambivalence over the terms of contemporary selfhood comes to the fore. Are there any connections to be drawn between this ambivalence and broader debates over governmental and corporate surveillance, data privacy, and the possibility for resistance?</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-12-12t20:20:10+00:00-3ede14a588c20d1</guid>
      <title>Living in a Culture of Algorithms</title>
      <description><![CDATA[<p>danah boyd weaves together her work on youth, privacy, and data-driven technologies, to examine the complicated social and cultural dynamics underpinning social media, the messiness of “big data,” and the problematic implications of using algorithms designed for one problem to address societal issues without accounting for unintended consequences.</p>
]]></description>
      <pubDate>Wed, 25 Jan 2017 17:39:22 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>danah boyd weaves together her work on youth, privacy, and data-driven technologies, to examine the complicated social and cultural dynamics underpinning social media, the messiness of “big data,” and the problematic implications of using algorithms designed for one problem to address societal issues without accounting for unintended consequences.</p>
]]></content:encoded>
      <enclosure length="19632257" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/cdd2675f-883d-44a3-a114-6c04f126f580/ep0019_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Living in a Culture of Algorithms</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:20:25</itunes:duration>
      <itunes:summary>danah boyd weaves together her work on youth, privacy, and data-driven technologies, to examine the complicated social and cultural dynamics underpinning social media, the messiness of “big data,” and the problematic implications of using algorithms designed for one problem to address societal issues without accounting for unintended consequences.</itunes:summary>
      <itunes:subtitle>danah boyd weaves together her work on youth, privacy, and data-driven technologies, to examine the complicated social and cultural dynamics underpinning social media, the messiness of “big data,” and the problematic implications of using algorithms designed for one problem to address societal issues without accounting for unintended consequences.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-28t18:38:25+00:00-9badc7efa2e3d40</guid>
      <title>The Future of Municipal Open Data, Smart Cities, and Civic Technology</title>
      <description><![CDATA[<p>Noel Hidalgo will journey through two fellowships — his Data &amp; Society Fellowship and the construction of a new fellowship for 21st century civic hackers. The first half of the discussion will focus on detailed lessons learned from working within the City’s civic technology community, collaborating with CUNY’s Service Corps students, building a municipal open data curriculum, and developing partnerships with the Mayor’s Office, Manhattan Borough President, and various City agencies.</p>
]]></description>
      <pubDate>Wed, 18 Jan 2017 20:31:17 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Noel Hidalgo will journey through two fellowships — his Data &amp; Society Fellowship and the construction of a new fellowship for 21st century civic hackers. The first half of the discussion will focus on detailed lessons learned from working within the City’s civic technology community, collaborating with CUNY’s Service Corps students, building a municipal open data curriculum, and developing partnerships with the Mayor’s Office, Manhattan Borough President, and various City agencies.</p>
]]></content:encoded>
      <enclosure length="53759742" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/eab9e2b4-94c0-49f4-b677-ae259c5af447/ep0010_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>The Future of Municipal Open Data, Smart Cities, and Civic Technology</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:55:56</itunes:duration>
      <itunes:summary>Noel Hidalgo will journey through two fellowships — his Data &amp; Society Fellowship and the construction of a new fellowship for 21st century civic hackers. The first half of the discussion will focus on detailed lessons learned from working within the City’s civic technology community, collaborating with CUNY’s Service Corps students, building a municipal open data curriculum, and developing partnerships with the Mayor’s Office, Manhattan Borough President, and various City agencies.</itunes:summary>
      <itunes:subtitle>Noel Hidalgo will journey through two fellowships — his Data &amp; Society Fellowship and the construction of a new fellowship for 21st century civic hackers. The first half of the discussion will focus on detailed lessons learned from working within the City’s civic technology community, collaborating with CUNY’s Service Corps students, building a municipal open data curriculum, and developing partnerships with the Mayor’s Office, Manhattan Borough President, and various City agencies.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-12-12t20:10:12+00:00-0d67c6efa19da30</guid>
      <title>Security and Privacy in a Hyper-connected World</title>
      <description><![CDATA[<p>Bruce Schneier describes how we have created a world where information technology permeates our economies, social interactions, and intimate selves. The combination of mobile, cloud computing, the Internet of Things, persistent computing, and autonomy is resulting in something altogether different — a world-sized web. This World-Sized Web promises great benefits, but it is also vulnerable to a host of new threats from users, criminals, corporations, and governments. These threats can now result in physical damage and even death.</p>
<p>In this talk, Schneier will look back at what we have learned from past attempts to secure these systems. He will also push us to consider seriously what technologies, laws, regulations, economic incentives, and social norms we will need to secure them in the future.</p>
]]></description>
      <pubDate>Wed, 18 Jan 2017 20:30:17 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Bruce Schneier describes how we have created a world where information technology permeates our economies, social interactions, and intimate selves. The combination of mobile, cloud computing, the Internet of Things, persistent computing, and autonomy is resulting in something altogether different — a world-sized web. This World-Sized Web promises great benefits, but it is also vulnerable to a host of new threats from users, criminals, corporations, and governments. These threats can now result in physical damage and even death.</p>
<p>In this talk, Schneier will look back at what we have learned from past attempts to secure these systems. He will also push us to consider seriously what technologies, laws, regulations, economic incentives, and social norms we will need to secure them in the future.</p>
]]></content:encoded>
      <enclosure length="36507427" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/db4f0671-1db0-4244-b6a5-c0dabe687241/ep0018_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Security and Privacy in a Hyper-connected World</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:37:59</itunes:duration>
      <itunes:summary>Bruce Schneier describes how we have created a world where information technology permeates our economies, social interactions, and intimate selves. The combination of mobile, cloud computing, the Internet of Things, persistent computing, and autonomy is resulting in something altogether different — a world-sized web. This World-Sized Web promises great benefits, but it is also vulnerable to a host of new threats from users, criminals, corporations, and governments. These threats can now result in physical damage and even death.

In this talk, Schneier will look back at what we have learned from past attempts to secure these systems. He will also push us to consider seriously what technologies, laws, regulations, economic incentives, and social norms we will need to secure them in the future.</itunes:summary>
      <itunes:subtitle>Bruce Schneier describes how we have created a world where information technology permeates our economies, social interactions, and intimate selves. The combination of mobile, cloud computing, the Internet of Things, persistent computing, and autonomy is resulting in something altogether different — a world-sized web. This World-Sized Web promises great benefits, but it is also vulnerable to a host of new threats from users, criminals, corporations, and governments. These threats can now result in physical damage and even death.

In this talk, Schneier will look back at what we have learned from past attempts to secure these systems. He will also push us to consider seriously what technologies, laws, regulations, economic incentives, and social norms we will need to secure them in the future.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-12-05t19:14:52+00:00-9471a8cf2269d6a</guid>
      <title>Weapons of Math Destruction</title>
      <description><![CDATA[<p>Tracing her experiences as a mathematician and data scientist working in academia, finance, and advertising, Cathy O’Neil will walk us through what she has learned about the pervasive, opaque, and unaccountable mathematical models that regulate our lives, micromanage our economy, and shape our behavior. Cathy will examine how statistical models often pose as neutral mathematical tools, lending a veneer of objectivity to decisions that can severely harm people at critical life moments.</p>
<p>Cathy will also share her concerns around how these models are trained, optimized, and operated at scale in ways that she deems to be arbitrary and statistically unsound and can lead to pernicious feedback loops that reinforce and magnify inequality in our society, rather than rooting it out. She will also suggest solutions and possibilities for building mathematical models that could lead to greater fairness and less harm and suffering.</p>
]]></description>
      <pubDate>Tue, 3 Jan 2017 20:17:49 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Tracing her experiences as a mathematician and data scientist working in academia, finance, and advertising, Cathy O’Neil will walk us through what she has learned about the pervasive, opaque, and unaccountable mathematical models that regulate our lives, micromanage our economy, and shape our behavior. Cathy will examine how statistical models often pose as neutral mathematical tools, lending a veneer of objectivity to decisions that can severely harm people at critical life moments.</p>
<p>Cathy will also share her concerns around how these models are trained, optimized, and operated at scale in ways that she deems to be arbitrary and statistically unsound and can lead to pernicious feedback loops that reinforce and magnify inequality in our society, rather than rooting it out. She will also suggest solutions and possibilities for building mathematical models that could lead to greater fairness and less harm and suffering.</p>
]]></content:encoded>
      <enclosure length="27452435" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/0ae3adcf-2791-4337-8084-457a71b8bb79/ep0015_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Weapons of Math Destruction</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:28:34</itunes:duration>
      <itunes:summary>Tracing her experiences as a mathematician and data scientist working in academia, finance, and advertising, Cathy O’Neil will walk us through what she has learned about the pervasive, opaque, and unaccountable mathematical models that regulate our lives, micromanage our economy, and shape our behavior. Cathy will examine how statistical models often pose as neutral mathematical tools, lending a veneer of objectivity to decisions that can severely harm people at critical life moments.

Cathy will also share her concerns around how these models are trained, optimized, and operated at scale in ways that she deems to be arbitrary and statistically unsound and can lead to pernicious feedback loops that reinforce and magnify inequality in our society, rather than rooting it out. She will also suggest solutions and possibilities for building mathematical models that could lead to greater fairness and less harm and suffering.</itunes:summary>
      <itunes:subtitle>Tracing her experiences as a mathematician and data scientist working in academia, finance, and advertising, Cathy O’Neil will walk us through what she has learned about the pervasive, opaque, and unaccountable mathematical models that regulate our lives, micromanage our economy, and shape our behavior. Cathy will examine how statistical models often pose as neutral mathematical tools, lending a veneer of objectivity to decisions that can severely harm people at critical life moments.

Cathy will also share her concerns around how these models are trained, optimized, and operated at scale in ways that she deems to be arbitrary and statistically unsound and can lead to pernicious feedback loops that reinforce and magnify inequality in our society, rather than rooting it out. She will also suggest solutions and possibilities for building mathematical models that could lead to greater fairness and less harm and suffering.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-07t18:46:47+00:00-3a1cbae173208cb</guid>
      <title>Understanding Patterns of Mass Violence with Data and Statistics</title>
      <description><![CDATA[<p>Patrick Ball discusses how data about mass violence can seem to offer insights into patterns: is violence getting better, or worse, over time? Is violence directed more against men or women? However, in human rights data collection, we (usually) don’t know what we don’t know — and worse, what we don’t know may be systematically different from what we do know.</p>
<p>This talk will explore the assumption that nearly every project using data must make: that the data are representative of reality in the world. We will see how, contrary to the standard assumption, statistical patterns in raw data tend to be quite different from patterns in the world. Statistical patterns in data reflect how the data were collected rather than changes in the real-world phenomena the data purport to represent.</p>
<p>Using analysis of killings in Iraq, homicides committed by police in the US, killings in the conflict in Syria, and homicides in Colombia, we will contrast patterns in raw data with estimated total patterns of violence. The talk will show how biases in raw data can be corrected through estimation, and explain why it matters in these countries and more generally.</p>
<p>Recorded on 3/24/2016.</p>
]]></description>
      <pubDate>Sun, 1 Jan 2017 20:13:36 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Patrick Ball discusses how data about mass violence can seem to offer insights into patterns: is violence getting better, or worse, over time? Is violence directed more against men or women? However, in human rights data collection, we (usually) don’t know what we don’t know — and worse, what we don’t know may be systematically different from what we do know.</p>
<p>This talk will explore the assumption that nearly every project using data must make: that the data are representative of reality in the world. We will see how, contrary to the standard assumption, statistical patterns in raw data tend to be quite different from patterns in the world. Statistical patterns in data reflect how the data were collected rather than changes in the real-world phenomena the data purport to represent.</p>
<p>Using analysis of killings in Iraq, homicides committed by police in the US, killings in the conflict in Syria, and homicides in Colombia, we will contrast patterns in raw data with estimated total patterns of violence. The talk will show how biases in raw data can be corrected through estimation, and explain why it matters in these countries and more generally.</p>
<p>Recorded on 3/24/2016.</p>
]]></content:encoded>
      <enclosure length="42573719" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/aa4fd070-81b4-4fc5-89b8-b35adaa8797c/ep0006_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Understanding Patterns of Mass Violence with Data and Statistics</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:44:18</itunes:duration>
      <itunes:summary>Patrick Ball discusses how data about mass violence can seem to offer insights into patterns: is violence getting better, or worse, over time? Is violence directed more against men or women? However, in human rights data collection, we (usually) don’t know what we don’t know — and worse, what we don’t know may be systematically different from what we do know.

This talk will explore the assumption that nearly every project using data must make: that the data are representative of reality in the world. We will see how, contrary to the standard assumption, statistical patterns in raw data tend to be quite different from patterns in the world. Statistical patterns in data reflect how the data were collected rather than changes in the real-world phenomena the data purport to represent.

Using analysis of killings in Iraq, homicides committed by police in the US, killings in the conflict in Syria, and homicides in Colombia, we will contrast patterns in raw data with estimated total patterns of violence. The talk will show how biases in raw data can be corrected through estimation, and explain why it matters in these countries and more generally.

Recorded on 3/24/2016.</itunes:summary>
      <itunes:subtitle>Patrick Ball discusses how data about mass violence can seem to offer insights into patterns: is violence getting better, or worse, over time? Is violence directed more against men or women? However, in human rights data collection, we (usually) don’t know what we don’t know — and worse, what we don’t know may be systematically different from what we do know.

This talk will explore the assumption that nearly every project using data must make: that the data are representative of reality in the world. We will explore how, contrary to the standard assumption, statistical patterns in raw data tend to be quite different from patterns in the world. Statistical patterns in data reflect how the data were collected rather than changes in the real-world phenomena the data purport to represent.

Using analysis of killings in Iraq, homicides committed by police in the US, killings in the conflict in Syria, and homicides in Colombia, we will contrast patterns in raw data with patterns in estimated totals of violence. The talk will show how biases in raw data can be corrected through estimation, and explain why it matters in these countries, and more generally.

Recorded on 3/24/2016.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-12-05t18:41:02+00:00-ffda73d19f71ad6</guid>
      <title>On Digital Passageways and Borders – Mark Latonero and Paula Kift</title>
<description><![CDATA[<p>Mark Latonero and Paula Kift on digital passageways and borders in the movement of refugees. Numerous media reports have highlighted that refugees now increasingly rely on digital devices such as smartphones in order to traverse their perilous routes, contact lost family members, or find safe places before dark. But claims that “a smartphone” may be “the most important” tool for Syrian refugees miss the bigger picture. Phones, social media, mobile apps, online maps, instant messaging, translation websites, wire money transfers, cell phone charging stations, and Wi-Fi hotspots have all created a new digital infrastructure for global movement. This infrastructure is as critical to refugees today as roads or railways. But digital infrastructures for movement can just as easily be turned into infrastructures for control by governments, corporations, and even criminals. Indeed, governments are increasingly experimenting with similar digital technologies to reinforce their border controls—to collect, process, and instrumentalize data in order to interfere with the movement of “undesirable” migrants.</p>
<p>Mark and Paula will explore these tensions and discuss how this new digital infrastructure 1) facilitates and constrains the flow of data and people, 2) conceals and constructs identity and status, and 3) affects refugees’ fundamental rights to privacy, data protection, and asylum.</p>
]]></description>
      <pubDate>Fri, 9 Dec 2016 20:14:52 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Mark Latonero and Paula Kift on digital passageways and borders in the movement of refugees. Numerous media reports have highlighted that refugees now increasingly rely on digital devices such as smartphones in order to traverse their perilous routes, contact lost family members, or find safe places before dark. But claims that “a smartphone” may be “the most important” tool for Syrian refugees miss the bigger picture. Phones, social media, mobile apps, online maps, instant messaging, translation websites, wire money transfers, cell phone charging stations, and Wi-Fi hotspots have all created a new digital infrastructure for global movement. This infrastructure is as critical to refugees today as roads or railways. But digital infrastructures for movement can just as easily be turned into infrastructures for control by governments, corporations, and even criminals. Indeed, governments are increasingly experimenting with similar digital technologies to reinforce their border controls—to collect, process, and instrumentalize data in order to interfere with the movement of “undesirable” migrants.</p>
<p>Mark and Paula will explore these tensions and discuss how this new digital infrastructure 1) facilitates and constrains the flow of data and people, 2) conceals and constructs identity and status, and 3) affects refugees’ fundamental rights to privacy, data protection, and asylum.</p>
]]></content:encoded>
      <enclosure length="23334641" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/420d5607-c733-4215-8ca8-15894808036c/ep0013_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>On Digital Passageways and Borders – Mark Latonero and Paula Kift</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:24:17</itunes:duration>
      <itunes:summary>Mark Latonero and Paula Kift on digital passageways and borders in the movement of refugees. Numerous media reports have highlighted that refugees now increasingly rely on digital devices such as smartphones in order to traverse their perilous routes, contact lost family members, or find safe places before dark. But claims that “a smartphone” may be “the most important” tool for Syrian refugees miss the bigger picture. Phones, social media, mobile apps, online maps, instant messaging, translation websites, wire money transfers, cell phone charging stations, and Wi-Fi hotspots have all created a new digital infrastructure for global movement. This infrastructure is as critical to refugees today as roads or railways. But digital infrastructures for movement can just as easily be turned into infrastructures for control by governments, corporations, and even criminals. Indeed, governments are increasingly experimenting with similar digital technologies to reinforce their border controls—to collect, process, and instrumentalize data in order to interfere with the movement of “undesirable” migrants.

Mark and Paula will explore these tensions and discuss how this new digital infrastructure 1) facilitates and constrains the flow of data and people, 2) conceals and constructs identity and status, and 3) affects refugees’ fundamental rights to privacy, data protection, and asylum.</itunes:summary>
      <itunes:subtitle>Mark Latonero and Paula Kift on digital passageways and borders in the movement of refugees. Numerous media reports have highlighted that refugees now increasingly rely on digital devices such as smartphones in order to traverse their perilous routes, contact lost family members, or find safe places before dark. But claims that “a smartphone” may be “the most important” tool for Syrian refugees miss the bigger picture. Phones, social media, mobile apps, online maps, instant messaging, translation websites, wire money transfers, cell phone charging stations, and Wi-Fi hotspots have all created a new digital infrastructure for global movement. This infrastructure is as critical to refugees today as roads or railways. But digital infrastructures for movement can just as easily be turned into infrastructures for control by governments, corporations, and even criminals. Indeed, governments are increasingly experimenting with similar digital technologies to reinforce their border controls—to collect, process, and instrumentalize data in order to interfere with the movement of “undesirable” migrants.

Mark and Paula will explore these tensions and discuss how this new digital infrastructure 1) facilitates and constrains the flow of data and people, 2) conceals and constructs identity and status, and 3) affects refugees’ fundamental rights to privacy, data protection, and asylum.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-12-05t18:54:16+00:00-ebb4d1daf450285</guid>
      <title>Ebola and the Law of Disaster Experimentation</title>
      <description><![CDATA[<p>Sean McDonald on Ebola and the Law of Disaster Experimentation. As an increasing number of industries digitize, the economy around data analysis – particularly predictive modeling – has exploded. The problem is, we don’t have any real way to understand, analyze, or predict the accuracy of these predictive models. There is no context where this has higher potential – for good and harm – than humanitarian emergencies.</p>
<p>One of the first, and worst, examples of this was the 2014 Ebola epidemic in West Africa. In its response to the escalating crisis, the humanitarian community sought out significant amounts of sensitive mobile data, epidemiological data models, and digital engagement tools, without understanding the impact they would have on the response effort. Whether that’s considered humanitarian innovation or disaster experimentation, there’s little question that it raises a significant number of legal, ethical, and practical questions.</p>
<p>This talk will focus on the intersection of the public interest, the law, and the digital approaches that are increasingly defining the way that we invest public resources and provide public services. We’ll talk about the Ebola case, the trends in public sector digitization, and what that means for the practical and legal protections of vulnerable groups.</p>
]]></description>
      <pubDate>Thu, 8 Dec 2016 20:16:31 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Sean McDonald on Ebola and the Law of Disaster Experimentation. As an increasing number of industries digitize, the economy around data analysis – particularly predictive modeling – has exploded. The problem is, we don’t have any real way to understand, analyze, or predict the accuracy of these predictive models. There is no context where this has higher potential – for good and harm – than humanitarian emergencies.</p>
<p>One of the first, and worst, examples of this was the 2014 Ebola epidemic in West Africa. In its response to the escalating crisis, the humanitarian community sought out significant amounts of sensitive mobile data, epidemiological data models, and digital engagement tools, without understanding the impact they would have on the response effort. Whether that’s considered humanitarian innovation or disaster experimentation, there’s little question that it raises a significant number of legal, ethical, and practical questions.</p>
<p>This talk will focus on the intersection of the public interest, the law, and the digital approaches that are increasingly defining the way that we invest public resources and provide public services. We’ll talk about the Ebola case, the trends in public sector digitization, and what that means for the practical and legal protections of vulnerable groups.</p>
]]></content:encoded>
      <enclosure length="39088524" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/ffd9585f-2fa3-49b4-a35b-7ca3ca48efb7/ep0014_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Ebola and the Law of Disaster Experimentation</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:40:41</itunes:duration>
      <itunes:summary>Sean McDonald on Ebola and the Law of Disaster Experimentation. As an increasing number of industries digitize, the economy around data analysis – particularly predictive modeling – has exploded. The problem is, we don’t have any real way to understand, analyze, or predict the accuracy of these predictive models. There is no context where this has higher potential – for good and harm – than humanitarian emergencies. 

One of the first, and worst, examples of this was the 2014 Ebola epidemic in West Africa. In its response to the escalating crisis, the humanitarian community sought out significant amounts of sensitive mobile data, epidemiological data models, and digital engagement tools, without understanding the impact they would have on the response effort. Whether that’s considered humanitarian innovation or disaster experimentation, there’s little question that it raises a significant number of legal, ethical, and practical questions.

This talk will focus on the intersection of the public interest, the law, and the digital approaches that are increasingly defining the way that we invest public resources and provide public services. We’ll talk about the Ebola case, the trends in public sector digitization, and what that means for the practical and legal protections of vulnerable groups.</itunes:summary>
      <itunes:subtitle>Sean McDonald on Ebola and the Law of Disaster Experimentation. As an increasing number of industries digitize, the economy around data analysis – particularly predictive modeling – has exploded. The problem is, we don’t have any real way to understand, analyze, or predict the accuracy of these predictive models. There is no context where this has higher potential – for good and harm – than humanitarian emergencies. 

One of the first, and worst, examples of this was the 2014 Ebola epidemic in West Africa. In its response to the escalating crisis, the humanitarian community sought out significant amounts of sensitive mobile data, epidemiological data models, and digital engagement tools, without understanding the impact they would have on the response effort. Whether that’s considered humanitarian innovation or disaster experimentation, there’s little question that it raises a significant number of legal, ethical, and practical questions.

This talk will focus on the intersection of the public interest, the law, and the digital approaches that are increasingly defining the way that we invest public resources and provide public services. We’ll talk about the Ebola case, the trends in public sector digitization, and what that means for the practical and legal protections of vulnerable groups.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-07t19:00:31+00:00-68581793d7b88cc</guid>
      <title>Balancing Privacy Obligations and Research Aims in a Learning Health Care System</title>
      <description><![CDATA[<p>Health information technology can save lives, cut costs, and expand access to care. But its full promise will only be realized if policymakers broker a “grand bargain” between providers, patients, and administrative agencies. In exchange for subsidizing systems designed to protect intellectual property and secure personally identifiable information, health regulators should have full access to key data those systems collect (once properly anonymized). Moreover, patients deserve to be able to channel certain information flows and gain some basic controls over the presentation, disclosure, and redisclosure of sensitive information. This podcast will describe and examine some legal and technical infrastructure designed to help realize these goals.</p>
]]></description>
      <pubDate>Fri, 4 Nov 2016 19:33:36 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Health information technology can save lives, cut costs, and expand access to care. But its full promise will only be realized if policymakers broker a “grand bargain” between providers, patients, and administrative agencies. In exchange for subsidizing systems designed to protect intellectual property and secure personally identifiable information, health regulators should have full access to key data those systems collect (once properly anonymized). Moreover, patients deserve to be able to channel certain information flows and gain some basic controls over the presentation, disclosure, and redisclosure of sensitive information. This podcast will describe and examine some legal and technical infrastructure designed to help realize these goals.</p>
]]></content:encoded>
      <enclosure length="23566875" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/527193d6-b864-4fd5-92f9-0cbdb5ade940/ep0008_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Balancing Privacy Obligations and Research Aims in a Learning Health Care System</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:24:31</itunes:duration>
      <itunes:summary>Health information technology can save lives, cut costs, and expand access to care. But its full promise will only be realized if policymakers broker a “grand bargain” between providers, patients, and administrative agencies. In exchange for subsidizing systems designed to protect intellectual property and secure personally identifiable information, health regulators should have full access to key data those systems collect (once properly anonymized). Moreover, patients deserve to be able to channel certain information flows and gain some basic controls over the presentation, disclosure, and redisclosure of sensitive information. This podcast will describe and examine some legal and technical infrastructure designed to help realize these goals.</itunes:summary>
      <itunes:subtitle>Health information technology can save lives, cut costs, and expand access to care. But its full promise will only be realized if policymakers broker a “grand bargain” between providers, patients, and administrative agencies. In exchange for subsidizing systems designed to protect intellectual property and secure personally identifiable information, health regulators should have full access to key data those systems collect (once properly anonymized). Moreover, patients deserve to be able to channel certain information flows and gain some basic controls over the presentation, disclosure, and redisclosure of sensitive information. This podcast will describe and examine some legal and technical infrastructure designed to help realize these goals.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-07t18:34:10+00:00-44c1ddaa6ea80c5</guid>
      <title>Genetic Coercion</title>
      <description><![CDATA[<p>Ifeoma Ajunwa on genetic coercion. Although we cannot disclaim the utility of genetic data, it is important to consider whether we are being socially and governmentally coerced to relinquish our genetic data. If so, what does this mean for privacy and discrimination? What are the obstacles and potential solutions to securing genetic data?</p>
<p>Recorded on 6/11/2015.</p>
]]></description>
      <pubDate>Tue, 1 Nov 2016 20:30:54 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Ifeoma Ajunwa on genetic coercion. Although we cannot disclaim the utility of genetic data, it is important to consider whether we are being socially and governmentally coerced to relinquish our genetic data. If so, what does this mean for privacy and discrimination? What are the obstacles and potential solutions to securing genetic data?</p>
<p>Recorded on 6/11/2015.</p>
]]></content:encoded>
      <enclosure length="28869253" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/8edd315d-ff85-4b95-a416-c1795286ccf0/ep0004_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>Genetic Coercion</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:30:02</itunes:duration>
      <itunes:summary>Ifeoma Ajunwa on genetic coercion. Although we cannot disclaim the utility of genetic data, it is important to consider whether we are being socially and governmentally coerced to relinquish our genetic data. If so, what does this mean for privacy and discrimination? What are the obstacles and potential solutions to securing genetic data?

Recorded on 6/11/2015.</itunes:summary>
      <itunes:subtitle>Ifeoma Ajunwa on genetic coercion. Although we cannot disclaim the utility of genetic data, it is important to consider whether we are being socially and governmentally coerced to relinquish our genetic data. If so, what does this mean for privacy and discrimination? What are the obstacles and potential solutions to securing genetic data?

Recorded on 6/11/2015.</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
    </item>
    <item>
      <guid isPermaLink="false">podlove-2016-11-07t17:41:19+00:00-33e03408fb07dfd</guid>
      <title>When Algorithms Become Culture</title>
      <description><![CDATA[<p>Tarleton Gillespie on how algorithms may now be our most important knowledge technologies, “the scientific instruments of a society at large.” Algorithms are increasingly vital to how we organize human social interaction, produce authoritative knowledge, and choreograph our participation in public life. Search engines, recommendation systems, and edge algorithms on social networking sites: these not only help us find information, they provide a means to know what there is to know and to participate in social and political discourse.</p>
<p>If not as pervasive and structurally central as search and recommendation, trending has emerged as an increasingly common feature of such interfaces and seems to be growing in cultural importance. It represents a fundamentally different logic for how to algorithmically navigate social media: besides identifying and highlighting what might be relevant to “you” specifically, trending algorithms identify what is popular with “us” more broadly.</p>
<p>But while the techniques may be new, the instinct is not: what today might be identified as “trending” is the latest instantiation of the instinct to map public attention and interest, be it surveys and polling, audience metrics, market research, forecasting, and trendspotting. Understanding the calculations and motivations behind the production of these “calculated publics,” in this historical context, helps highlight how these algorithms are relevant to our collective efforts to know and be known.</p>
<p>Rather than discuss the effect of trending algorithms, I want to ask what it means that they have become a meaningful element of public culture. Algorithms, particularly those involved in the movement of culture, are both mechanisms of distribution and valuation, part of the process by which knowledge institutions circulate and evaluate information, the process by which new media industries provide and sort culture. This essay examines the way these algorithmic techniques themselves become cultural objects, get taken up in our thinking about culture and the public to which it is addressed, and get contested both for what they do and what they reveal. We should ask not just how algorithms shape culture, but how they become culture.</p>
<p>Recorded on 2/25/2016.</p>
]]></description>
      <pubDate>Fri, 7 Oct 2016 17:43:06 +0000</pubDate>
      <author>events@datasociety.net (Data &amp; Society)</author>
      <link>https://listen.datasociety.net</link>
      <content:encoded><![CDATA[<p>Tarleton Gillespie on how algorithms may now be our most important knowledge technologies, “the scientific instruments of a society at large.” Algorithms are increasingly vital to how we organize human social interaction, produce authoritative knowledge, and choreograph our participation in public life. Search engines, recommendation systems, and edge algorithms on social networking sites: these not only help us find information, they provide a means to know what there is to know and to participate in social and political discourse.</p>
<p>If not as pervasive and structurally central as search and recommendation, trending has emerged as an increasingly common feature of such interfaces and seems to be growing in cultural importance. It represents a fundamentally different logic for how to algorithmically navigate social media: besides identifying and highlighting what might be relevant to “you” specifically, trending algorithms identify what is popular with “us” more broadly.</p>
<p>But while the techniques may be new, the instinct is not: what today might be identified as “trending” is the latest instantiation of the instinct to map public attention and interest, be it surveys and polling, audience metrics, market research, forecasting, and trendspotting. Understanding the calculations and motivations behind the production of these “calculated publics,” in this historical context, helps highlight how these algorithms are relevant to our collective efforts to know and be known.</p>
<p>Rather than discuss the effect of trending algorithms, I want to ask what it means that they have become a meaningful element of public culture. Algorithms, particularly those involved in the movement of culture, are both mechanisms of distribution and valuation, part of the process by which knowledge institutions circulate and evaluate information, the process by which new media industries provide and sort culture. This essay examines the way these algorithmic techniques themselves become cultural objects, get taken up in our thinking about culture and the public to which it is addressed, and get contested both for what they do and what they reveal. We should ask not just how algorithms shape culture, but how they become culture.</p>
<p>Recorded on 2/25/2016.</p>
]]></content:encoded>
      <enclosure length="36458224" type="audio/mpeg" url="https://cdn.simplecast.com/audio/3deecd/3deecd15-b5fd-4134-9316-27de084c9d3e/c9514a64-7a5a-40a5-89f4-ad9d5ff9b24f/ep0003_tc.mp3?aid=rss_feed&amp;feed=eL5Oo9jN"/>
      <itunes:title>When Algorithms Become Culture</itunes:title>
      <itunes:author>Data &amp; Society</itunes:author>
      <itunes:duration>00:37:56</itunes:duration>
      <itunes:summary>Tarleton Gillespie on how algorithms may now be our most important knowledge technologies, “the scientific instruments of a society at large.” Algorithms are increasingly vital to how we organize human social interaction, produce authoritative knowledge, and choreograph our participation in public life. Search engines, recommendation systems, and edge algorithms on social networking sites: these not only help us find information, they provide a means to know what there is to know and to participate in social and political discourse.</itunes:summary>
      <itunes:subtitle>Tarleton Gillespie on how algorithms may now be our most important knowledge technologies, “the scientific instruments of a society at large.”</itunes:subtitle>
      <itunes:explicit>false</itunes:explicit>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
    </item>
  </channel>
</rss>