<div dir="ltr"><span style="color:rgb(0,0,0);font-family:Arial,sans-serif;font-size:11pt;white-space:pre-wrap">Desde el Programa de Becas AISAR en AI Safety tenemos el placer de invitarlos a la próxima charla de nuestro seminario online, con la participación de investigadores del área.</span><br><div class="gmail_quote gmail_quote_container"><div dir="ltr"><span id="m_-2848401199917901999gmail-docs-internal-guid-865437fe-7fff-43c0-6a9d-e2491c6cad5a" style="color:rgb(0,0,0)"><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">📌 </span><span style="font-size:11pt;font-family:Arial,sans-serif;font-weight:700;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Fecha y hora:</span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> Lunes 13 de octubre, 10:00 hs (ARG).</span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><br></span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">🎤 </span><span style="font-size:11pt;font-family:Arial,sans-serif;font-weight:700;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Orador:</span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> Joar Skalse – PhD @ University of Oxford | Director @ DEDUCTO</span><span style="font-size:11pt;font-family:Arial,sans-serif;font-style:italic;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"><br></span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">📖 </span><span style="font-size:11pt;font-family:Arial,sans-serif;font-weight:700;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Título:</span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> </span><span style="font-size:11pt;font-family:Arial,sans-serif;font-style:italic;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">The Theoretical 
Foundations of Reward Learning</span></p><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><span style="color:rgb(34,34,34);font-size:11pt;font-family:Arial,sans-serif;font-weight:700;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Charla online:</span><span style="color:rgb(34,34,34);font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap"> Para asistir a la charla, registrate acá: </span><a href="https://luma.com/nostkcsb" style="text-decoration:none" target="_blank"><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;text-decoration:underline;vertical-align:baseline;white-space:pre-wrap">https://luma.com/nostkcsb</span></a></p></span><p></p><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;font-family:Arial,sans-serif;font-weight:700;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">Abstract: </span><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">In this talk, I will provide an overview of my research on how to build a theoretical foundation for the field of reward learning, including my main motivations for pursuing this research, and some of my core results.</span></p><p dir="ltr" style="line-height:1.38;margin-top:12pt;margin-bottom:12pt"><span style="font-size:11pt;font-family:Arial,sans-serif;font-variant-ligatures:normal;font-variant-alternates:normal;font-variant-numeric:normal;font-variant-east-asian:normal;vertical-align:baseline;white-space:pre-wrap">This research agenda involves answering questions such as: What is the right method for expressing goals and instructions to AI systems? How similar must two different goal specifications be in order to not be hackable? What is the right way to quantify the differences and similarities between different goal specifications in a given specification language? What happens if you execute a task specification that is not close to the “ideal” specification? Which specification learning algorithms are guaranteed to converge to a good specification? How sensitive are these specification learning algorithms to misspecification? 
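As a rough illustration of the kind of formal question involved (our gloss in standard RL notation, not necessarily the exact definitions used in the talk): given a set of policies Π, a proxy reward function R_1 and an intended reward function R_2, with policy-value functions J_1 and J_2, the pair (R_1, R_2) can be called unhackable on Π when there is no pair of policies π, π' ∈ Π with

    J_1(π) < J_1(π')  and  J_2(π) > J_2(π'),

i.e., no policy change that strictly increases the proxy value while strictly decreasing the intended value. Several of the questions above ask when guarantees of this kind hold, and how to define a distance d(R_1, R_2) between specifications under which a bounded error still permits safe optimisation.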
Find more details at: https://www.lesswrong.com/s/TEybbkyHpMEB2HTv3

The AISAR Team
http://scholarship.aisafety.ar/