ABOUT US

Artificial Minds, Human Values is a personal research and public-writing project led by Ruize Xia.

The project was created to study a problem that is increasingly hard to ignore: artificial intelligence is becoming more capable, more visible, and more influential in everyday life, yet the language we use to judge it often remains too narrow. We talk about speed, scale, accuracy, and benchmarks. We talk less about dignity, legibility, fairness, and what good judgment should look like when machines participate in human decisions.

This site responds to that gap with an interdisciplinary approach. It combines technical curiosity with ethical analysis, drawing from machine learning, philosophy, education, accessibility, and civic life. The aim is not to separate engineering from human concerns, but to put them back into the same sentence.

Ruize’s work asks questions such as:

  • What should responsible AI look like in classrooms, public institutions, and community settings?
  • How do we evaluate systems not only by output quality, but also by whether people can trust, question, and understand them?
  • What does it mean to keep a human being visible when automation encourages abstraction?

The work on this site takes three forms.

First, research. Concept notes and working papers explore value-aligned evaluation, accessibility, and the role of human judgment in socio-technical systems.

Second, writing. Essays connect AI to everyday moral and civic questions: explanation, accountability, labor, inclusion, and the responsibilities of designers.

Third, service. Workshops, reading groups, and public-interest experiments translate ideas into practical work that helps learners and communities engage with AI critically rather than passively.

The deeper goal is to build a style of inquiry that is technically serious and morally serious at the same time. Not alarmism. Not hype. Just clearer thinking, better design, and stronger human values.