STSV Lab Publications

Publications

2026

The Marquand House Collective (Aidinoff, M., Armstrong, L., Bhandari, E., Biddle, E.R., Eslami, M., Karahalios, K., Matias, N., Metaxa, D., Nelson, A., Sandvig, C., and Vaccaro, K.). Auditing AI. Cambridge: MIT Press, forthcoming 2026.

2025

Longpre, S., Klyman, K., Appel, R.A., Kapoor, S., Bommasani, R., Sahar, M., McGregor, S., Ghosh, S., Blili-Hamelin, B., Butters, N., Nelson, A., Elazari, A., Sellars, A., Ellis, C.J., Sherrets, D., Song, D., Geiger, H., Cohen, I., McIlvenny, L., Srikumar, M., Jaycox, M., Anderljung, M., Johnson, N.F., Carlini, N., Miailhe, N., Marda, N., Henderson, P., Portnoff, R., Weiss, R., Westerhoff, R., Jernite, Y., Chowdhury, R., Liang, P., & Narayanan, A. (2025, March). In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI.

Nelson, A. (2025, February 14). Three Fallacies: Remarks at the Elysée Palace on the Occasion of the AI Action Summit. Tech Policy Press.

Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., Choi, Y., Fox, P., Garfinkel, B., Goldfarb, D., Heidari, H., Ho, A., Kapoor, S., Khalatbari, L., Longpre, S., Manning, S., Mavroudis, V., Mazeika, M., Michael, J., Newman, J., Ng, K. Y., Okolo, C. T., Raji, D., Sastry, G., Seger, E., Skeadas, T., South, T., Strubell, E., Tramèr, F., Velasco, L., Wheeler, N., Acemoglu, D., Adekanmbi, O., Dalrymple, D., Dietterich, T. G., Felten, E. W., Fung, P., Gourinchas, P.-O., Heintz, F., Hinton, G., Jennings, N., Krause, A., Leavy, S., Liang, P., Ludermir, T., Marda, V., Margetts, H., McDermid, J., Munga, J., Narayanan, A., Nelson, A., Neppel, C., Oh, A., Ramchurn, G., Russell, S., Schaake, M., Schölkopf, B., Song, D., Soto, A., Tiedrich, L., Varoquaux, G., Yao, A., Zhang, Y.-Q., Albalawi, F., Alserkal, M., Ajala, O., Avrin, G., Busch, C., de Carvalho, A. C. P. L. F., Fox, B., Gill, A. S., Hatip, A. H., Heikkilä, J., Jolly, G., Katzir, Z., Kitano, H., Krüger, A., Johnson, C., Khan, S. M., Lee, K. M., Ligot, D. V., Molchanovskyi, O., Monti, A., Mwamanzi, N., Nemer, M., Oliver, N., López Portillo, J. R., Ravindran, B., Rivera, R. P., Riza, H., Rugege, C., Seoighe, C., Sheehan, J., Sheikh, H., Wong, D., & Zeng, Y. (2025, January). International AI Safety Report: The International Scientific Report on the Safety of Advanced AI.

2024

Faveri, B., Johnson-León, M., Sylvester, P., Chun, W. H. K., Hannák, A., Mendoza, M., Broussard, M., Enikolopov, R., Nelson, A., Sandvig, C., Sesan, G., Sporle, A., Srihari, R., Srinivasan, J., Wilson, C., & Zou, M. (2024). Towards a global AI auditing framework: Assessment and recommendations. International Panel on the Information Environment.

The AI Democracy Projects. (2024, October 30). AI models falter answering election questions in Spanish.

United Nations High-Level Advisory Body on Artificial Intelligence. (2024, September). Governing AI for humanity. Office of the UN Secretary General.

Bengio, Y., Nelson, A., Prud'homme, B., Ravindran, B., Zeng, Y., Artigas, C., Balicer, R., Gluckman, P., Harakka, T., Tallinn, J., Wang, J., Cuellar, M.-F., Seghrouchni, A. E. F., Tse, B., Ha, J.-W., Ndiaye, S. M., Hendrycks, D., Chowdhury, R., Manyika, J., & Rossi, F. (2024, September). Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence.

Bommasani, R., Arora, S., Choi, Y., Ho, D. E., Jurafsky, D., Koyejo, S., Lakkaraju, H., Li, F.-F., Narayanan, A., Nelson, A., Pierson, E., Pineau, J., Varoquaux, G., Venkatasubramanian, S., Stoica, I., Liang, P., & Song, D. (2024, September). A path for science- and evidence-based AI policy.

Dellatolass, I., & Nelson, A. (2024, August). Scientific expertise and public engagement in science policy: A conversation with Dr. Alondra Nelson. MIT Science Policy Review.

Kapoor, S., Bommasani, R., Klyman, K., Longpre, S., Ramaswami, A., Cihon, P., Hopkins, A., Bankston, K., Biderman, S., Bogen, M., Chowdhury, R., Engler, A., Henderson, P., Jernite, Y., Lazar, S., Maffulli, S., Nelson, A., Pineau, J., Skowron, A., ... Narayanan, A. (2024, July). On the societal impact of open foundation models. In Proceedings of the 41st International Conference on Machine Learning (pp. 23082-23104). PMLR 235.

Nelson, A., & Fields-Meyer, A. (2024, July 22). The AI dangers of a second Trump presidency. Tech Policy Press.

Bengio, Y., Mindermann, S., Privitera, D., Besiroglu, T., Bommasani, R., Casper, S., Choi, Y., Goldfarb, D., Heidari, H., Khalatbari, L., Longpre, S., Mavroudis, V., Mazeika, M., Ng, K. Y., Okolo, C. T., Raji, D., Skeadas, T., Tramèr, F., Adekanmbi, B., Christiano, P., Dalrymple, D., Dietterich, T. G., Felten, E., Fung, P., Gourinchas, P.-O., Jennings, N., Krause, A., Liang, P., Ludermir, T., Marda, V., Margetts, H., McDermid, J. A., Narayanan, A., Nelson, A., Oh, A., Ramchurn, G., Russell, S., Schaake, M., Song, D., Soto, A., Tiedrich, L., Varoquaux, G., Yao, A., & Zhang, Y. (2024, May). International scientific report on the safety of advanced AI (Interim Report).

AI Policy and Governance Working Group. (2024, March). Recommendations to the US Department of Commerce (NTIA) on open foundation AI models. Institute for Advanced Study.

Angwin, J., Nelson, A., & Palta, R. (2024, February 27). Seeking reliable election information? Don't trust AI. The AI Democracy Projects.

Palta, R., Angwin, J., & Nelson, A. (2024, February 27). How we tested leading AI models' performance on election queries. The AI Democracy Projects.

Nelson, A. (2024, January 12). The right way to regulate AI. Foreign Affairs.

2023

van Wichelen, S., Rohde, J., Graizbord, D., Sims, C., Barkan, J., Nelson, A., & Thompson, C. (2023, November). Introduction: Science and the state. Special issue of Public Culture.

Nelson, A. (2023, October 24). Statement to the United States Senate AI Insight Forum on Innovation.

AI Policy and Governance Working Group. (2023, September 30). Recommendations on global AI governance to the United Nations Secretary-General's Envoy on Technology. Institute for Advanced Study.

Lazar, S., & Nelson, A. (2023, July 13). AI safety on whose terms? Science.

AI Policy and Governance Working Group. (2023, June 12). Comment of the AI Policy and Governance Working Group on the NTIA AI accountability policy request for comment. Institute for Advanced Study.

Nelson, A. (2023, April 11). AI is having a moment—and policymakers cannot squander the opportunity to act. Center for American Progress.