One of the joys I experienced when my kids were young was learning together. Lots of people participated in their education – my amazing wife organized the whole thing and taught language arts, my dad covered computer science, math, and drawing, and the kids participated in groups, clubs, and classes (foreign languages, music, Judo, and a bunch of other stuff). I did the science and philosophy with each of them. My goal was not for them to learn the history of philosophy or who said what when; it was more about critical thinking – how to ask interesting questions, how to make progress towards better questions, and occasionally even some provisional answers. How to be comfortable with uncertainty, how to soak up the awe of big ideas, and how to feel the joy of entering a new cloud of thoughts one hadn’t seen before.
I will eventually post some other Units that I did, and possibly my wife will put up the whole curriculum. Here, I describe one particular Unit, which we covered in about a semester: how to think about minds and cognition (from my perspective: Diverse Intelligence). Maybe someone will find it useful and adapt it to their own needs – there’s a lot here that can be done in different ways and with different age levels (I myself used this project to hone and troubleshoot some activities I will be deploying with my own students at Tufts).
First, we had a book. The book wasn’t strictly guiding the activities (which I made up), but one thing we did was read a chapter a week and discuss its content. Our book was the remarkable Picturing the Mind by Simona Ginsburg and Eva Jablonka.

Here are some of the specific activities we did:
Read about and discuss the Physarum mass choice assay from this paper. It looks like this – the slime mold is able to detect the bigger mass (3 glass disks instead of 1; in fact, it senses the strain angle in the agar it’s growing on) and reliably go to it (note the “pondering time” of up to about 800 minutes before it actually acts):
We talked about what it meant, how to think about decision-making in this novel embodiment, and what the mechanisms might be (i.e., how would you build one yourself, say in a robotics implementation). We then actually did the experiments, using household materials (so the basement was temporarily taken over by slime mold). It was a good, cheap, and safe introduction to biological experimental design and data analysis, and we discussed very simple statistics (in this case, the chi-squared test) applied to real-world data.
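To show what that kind of analysis looks like, here is a minimal sketch of a chi-squared goodness-of-fit test for a two-choice assay, computed by hand with only the standard library. The trial counts below are invented for illustration; they are not our actual experimental data.

```python
# Pearson's chi-squared goodness-of-fit test for a two-choice assay,
# done by hand so it needs only the standard library.

def chi_squared(observed, expected):
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts of which side the plasmodium grew toward.
observed = [17, 5]            # [3-disk side, 1-disk side] (made up)
total = sum(observed)
expected = [total / 2] * 2    # null hypothesis: no preference (50/50)

stat = chi_squared(observed, expected)

# Critical value for p = 0.05 with 1 degree of freedom.
CRITICAL_005_DF1 = 3.841
print(f"chi2 = {stat:.3f}")
print("significant at p < 0.05" if stat > CRITICAL_005_DF1
      else "not significant at p < 0.05")
```

Comparing the statistic against the tabulated critical value (rather than computing an exact p-value) keeps the arithmetic simple enough to do on paper, which is part of the point when teaching it.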
We then talked about a new experiment that could be done with this system, and one kid decided to test the decision-making in a weird scenario that this model system facilitates: when the number of agents is not constant (i.e., when the agent’s behavioral choice actually ends up splitting or combining itself, a kind of minimal Strange Loop that goes meta to the typical formalism of a fixed agent having behaviors). His lab report for it became this preprint (which was then the hook for discussions of how to write an academic paper and the whole process related to publications).
Along the way, we covered skills like:
- how to find papers in the scientific literature (on-line search tools, PubMed, Google Scholar, etc.)
- how to read scientific papers
- how to cite sources using a reference manager (I like Endnote)
- how to plan, write, and edit a paper
- how to make and use mindmaps
- how to apply for a small grant (a few hundred dollars, which we used to pay survey takers on AWS)
- how to use AWS Mechanical Turk (which was remarkably difficult, by the way) to get additional datapoints for the survey
Over the weeks, we specifically addressed issues like:
- philosophical and social issues of the Diverse Intelligence field
- implications for personal and interpersonal ethics
- relationships of this field to other disciplines, from computer science to ecology
One of the capstone projects (a choice from among several options) was to develop a survey about attitudes on open questions about philosophy of mind and the role of AIs in society, and deploy it (the first guinea pigs were largely people who follow me on Twitter). This meant:
- identify interesting questions
- make hypotheses about how answers on various questions should (or shouldn’t) correlate
- learn to use the Qualtrics platform for designing surveys
- collect and analyze data using Excel and other tools
- make a PowerPoint presentation about the project and results
- learn to give presentations to others (starting with individual practice, then to the family, then to a class of peer kids, then to an actual laboratory of scientists). The latter was via Zoom.
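One of the analysis steps above – testing whether answers to different questions correlate – can be sketched with a hand-rolled Pearson correlation, using only the standard library. The Likert-scale responses and question wordings below are made up for illustration, not taken from the real survey.

```python
# Pearson correlation between answers on two survey questions,
# computed from scratch with the standard library only.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical 1-5 Likert answers from six respondents to two questions.
q1 = [5, 4, 4, 2, 1, 3]   # e.g. "machines could someday have minds"
q2 = [4, 5, 3, 2, 2, 3]   # e.g. "AIs deserve some moral consideration"

r = pearson(q1, q2)
print(f"r = {r:.2f}")   # values near +1 or -1 suggest a strong relationship
```

In practice a spreadsheet's CORREL function does the same thing, but writing it out once makes the formula concrete for a student before handing the job back to Excel.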
The survey had an interesting twist, in that it split the participants into several groups each of whom saw a different priming text and then were asked to answer the same controversial question. Some of the priming texts were written by the student, and some were generated via GPT-4, which let the student learn how to interact with the LLM and analyze data to see whether his reasoning or the LLM’s was more effective in swaying the public to a specific outcome. The data will be published in a preprint in a few months (and there are some quite interesting patterns there; we got >300 participants).
The presentation slides he made are here:
and the video presentation is here:
You can take the survey itself here: https://tufts.qualtrics.com/jfe/form/SV_ba5pRoLqZQUiiyy

All in all it was a lot of fun, including the many barriers we faced along the way (which is a big part of the goal of the learning).
A different exercise I like, which I also deploy with my undergraduate classes at Tufts, is called “How do you know?”. Everyone writes down something on a piece of paper that they (think they) know nowadays – things like “humans have an unconscious mind”, “electricity and magnetism are two forms of the same underlying phenomenon”, “our world has 3+1 dimensions”, “DNA carries genetic material”, “mutations are random with respect to their effect on fitness”, etc. Then all the students swap papers, and each person tries to reconstruct how we know this particular thing – what the evidence for it was and what the most convincing case is that can be made for it. Then the original person tries to poke holes in that reasoning. It makes for a good discussion, and one outcome is that people realize that for much of the stuff we “know”, we generally have no idea how or why we think it’s true.
