DPhil Poster Presentation (Michaelmas 2024)
Date:
Below are the references and resources used for my DPhil poster presentation at the Oxford Internet Institute.
References
Why care about AI agents?
Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., Langosco, L., He, Z., Duan, Y., Carroll, M., Lin, M., Mayhew, A., Collins, K., Molamohammadi, M., Burden, J., Zhao, W., Rismani, S., Voudouris, K., Bhatt, U., … Maharaj, T. (2023). Harms from increasingly agentic algorithmic systems. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 651–666. https://doi.org/10.1145/3593013.3594033
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2024). GPTs are GPTs: Labor market impact potential of LLMs. Science, 384(6702), 1306–1308. https://doi.org/10.1126/science.adj0998
Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., Zhao, W. X., Wei, Z., & Wen, J. (2024). A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6), 186345. https://doi.org/10.1007/s11704-024-40231-1
Why the Danish public sector?
Ilsøe, A., Larsen, T. P., Mathieu, C., & Rolandsson, B. (2024). Negotiating about algorithms: Social partner responses to AI in Denmark and Sweden. ILR Review, 77(5), 856–868. https://doi.org/10.1177/00197939241278956
Jørgensen, R. F. (2023). Data and rights in the digital welfare state: The case of Denmark. Information, Communication & Society, 26(1), 123–138. https://doi.org/10.1080/1369118X.2021.1934069
What are the current limitations?
Kapoor, S., Stroebl, B., Siegel, Z. S., Nadgir, N., & Narayanan, A. (2024). AI agents that matter (No. arXiv:2407.01502). arXiv. https://doi.org/10.48550/arXiv.2407.01502
Selbst, A. D., boyd, danah, Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598
Weidinger, L., Rauh, M., Marchal, N., Manzini, A., Hendricks, L. A., Mateos-Garcia, J., Bergman, S., Kay, J., Griffin, C., Bariach, B., Gabriel, I., Rieser, V., & Isaac, W. (2023). Sociotechnical safety evaluation of generative AI systems (No. arXiv:2310.11986). arXiv. http://arxiv.org/abs/2310.11986
Paper 1: Political bias
Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357.
Gordon, J., Babaeianjelodar, M., & Matthews, J. (2020). Studying political bias via word embeddings. Companion Proceedings of the Web Conference 2020, 760–764. https://doi.org/10.1145/3366424.3383560
Rystrøm, J. (2024). Jhrystrom/wicked-fair [Computer software]. https://github.com/jhrystrom/wicked-fair
Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., Evans, G., Campbell-Gillingham, L., Collins, T., Parkes, D. C., Botvinick, M., & Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852. https://doi.org/10.1126/science.adq2852
Paper 2: Fairness in tool use
Ye, J., Li, S., Li, G., Huang, C., Gao, S., Wu, Y., Zhang, Q., Gui, T., & Huang, X. (2024). ToolSword: Unveiling safety issues of large language models in tool learning across three stages. In L.-W. Ku, A. Martins, & V. Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2181–2211). Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.acl-long.119
Paper 3: Fair agent framework
Mökander, J., Schuett, J., Kirk, H. R., & Floridi, L. (2024). Auditing large language models: A three-layered approach. AI and Ethics, 4(4), 1085–1115. https://doi.org/10.1007/s43681-023-00289-2
Resources
The poster used beautiful icons from the Noun Project. Credits are below 👇
- Public Service by putri amaliya from Noun Project (CC BY 3.0)
- robot software by Dava Arya Ditya from Noun Project (CC BY 3.0)
- Tool by Suminah Wulandari from Noun Project (CC BY 3.0)
- bias by Nithinan Tatah from Noun Project (CC BY 3.0)
- Miscommunication by Prashanth Rapolu from Noun Project (CC BY 3.0)
- government digital service by Eucalyp from Noun Project (CC BY 3.0)
- Culture by Chaiwat Kinkaew from Noun Project (CC BY 3.0)
