I am Dhaval Adjodah, a machine learning and policy researcher working on pushing the limits of modern machine learning while maximizing its social good. Currently, I am a research scientist at the MIT Quest for Intelligence, whose purpose is to make advances towards building machine intelligence. Previously, I was a research scientist at the MIT Media Lab, where I developed new machine learning algorithms using insights from social and cognitive science. I am also a consultant for the World Bank, where I am helping build new machine learning pipelines to track and implement SDG policy.

In addition to my day-to-day work (software engineering, running live online human experiments, and writing papers), I enjoy being of service to the computer science and social science communities, most recently as a member of the organizing committee of the AI for Social Good conference workshop series, and on the program committees of the Black in AI initiative and the Theoretical Foundations of Reinforcement Learning workshop.

My PhD thesis was in computational social science and reinforcement learning. I was also a member of the Harvard Berkman Assembly on Ethics and Governance in Artificial Intelligence, and a fellow at the Dalai Lama Center for Ethics and Transformative Values. Previously, I worked as a data scientist in banking and insurance, consulted with the Veterans Health Administration, and founded two startup incubators. I hold a master's degree from the Technology and Policy Program at the (now) Institute for Data, Systems, and Society, and a bachelor's in Physics (with a focus on nonlinear dynamics), both from MIT.

My email is contact dot dval dot me at gmail dot com. I'm also on Twitter. GitHub here.

Highlighted Projects:
Leveraging Communication Topologies between Learning Agents in Deep Reinforcement Learning

There has been a lot of recent work showing that sparsity in neural network structure can lead to large improvements, for example through the Lottery Ticket Hypothesis. Coming from a computational social science background, we also know that humans self-organize into sparse social networks. My hypothesis was that organizing the communication topology (the social network) between agents might lead to improvements in learning performance. This matters because some machine learning paradigms, reinforcement learning in particular, are becoming more and more distributed in order to parallelize learning, similar to how human society balances exploration and exploitation. Well, we find huge improvements: 10–798% on state-of-the-art robotics simulators! Accepted at AAMAS 2020. [Code, Paper]
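To give a flavor of the idea, here is a minimal sketch (under assumed choices, not the algorithm from the paper): a population of learners periodically mixes parameters with neighbors defined by a communication graph, and the sparsity of that graph is the design knob. The Erdős-Rényi graph family, the plain neighborhood averaging, and all names below are illustrative assumptions.

```python
# Illustrative sketch only: parameter mixing over a sparse communication topology.
# The graph family, mixing rule, and toy parameters are assumptions, not the paper's method.
import numpy as np
import networkx as nx

def make_topology(n_agents, p=0.2, seed=0):
    # Sparse Erdos-Renyi graph; a complete graph would be the fully-connected baseline.
    return nx.erdos_renyi_graph(n_agents, p, seed=seed)

def neighbor_average(params, graph):
    # Each agent replaces its parameter vector with the mean over itself and its neighbors.
    return [
        np.mean([params[j] for j in [i, *graph.neighbors(i)]], axis=0)
        for i in graph.nodes
    ]

# Toy usage: 10 agents with 4-dimensional parameter vectors, one round of mixing.
rng = np.random.default_rng(0)
params = [rng.normal(size=4) for _ in range(10)]
topology = make_topology(10)
params = neighbor_average(params, topology)
```

In a distributed RL loop, each agent would run its own environment and gradient updates between mixing rounds; the experiment is then to vary how sparse or dense the topology is and measure learning performance.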

Bayesian Models of Cognition in the Wisdom of the Crowd

How do we design and deploy crowdsourced prediction platforms for real-world applications where risk is an important dimension of prediction performance? To help answer this question, we conducted a large online Wisdom of the Crowd study in which participants predicted the prices of real financial assets (e.g., the S&P 500). We observe a Pareto frontier between prediction accuracy and risk, and find that this trade-off is mediated by social learning: as social learning is increasingly leveraged, accuracy decreases but so does risk. We also observe that social learning leads to superior accuracy in one of our rounds, which took place amid the high market uncertainty of the Brexit vote. Our results have implications for the design of crowdsourced prediction platforms: for example, they suggest that the performance of the crowd should be characterized more comprehensively by using both accuracy and risk (as is standard in financial and statistical forecasting), in contrast to prior work where the risk of prediction has been overlooked. [Working paper]
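As a back-of-the-envelope illustration of the accuracy-risk framing (a toy simulation with assumed numbers, not the study's data or analysis): suppose each participant blends a private estimate with an early, possibly biased public signal, with the blend weight standing in for how much social learning is leveraged. Herding toward the shared signal shrinks the dispersion of predictions (risk), while the error of the aggregate prediction depends on how biased that signal is.

```python
# Toy illustration with made-up numbers; not the study's data, model, or results.
import numpy as np

rng = np.random.default_rng(1)
true_price = 100.0
private = true_price + rng.normal(0, 10, size=200)  # independent private estimates
anchor = private[:10].mean()                        # early, possibly biased public signal

for w in (0.0, 0.5, 0.9):                           # weight placed on social information
    social = (1 - w) * private + w * anchor         # socially-adjusted predictions
    error = abs(social.mean() - true_price)         # accuracy of the aggregate prediction
    risk = social.std()                             # dispersion across participants
    print(f"social-learning weight {w:.1f}: error={error:.2f}, risk={risk:.2f}")
```

Risk always falls as the weight grows in this toy, while whether accuracy falls with it depends on how far the early signal sits from the true price; this is the kind of trade-off the Pareto frontier above summarizes.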

Selected Publications: