In the late 1600s, William Molyneux posed a question to his friend John Locke:
Suppose a man born blind, and now adult, and taught by his touch to distinguish between a cube and a sphere of the same metal, and nighly of the same bigness, so as to tell, when he felt one and the other, which is the cube, which is the sphere. Suppose then the cube and the sphere placed on a table, and the blind man made to see: query, Whether by his sight, before he touched them, he could now distinguish and tell which is the globe, which the cube?
This question is hard to investigate empirically. Hard questions are thrown to the philosophers. For this particular question, the philosopher must consider whether knowledge acquired through one sense can be transferred to and reactivated through another.
I’m working on a project that has us “sensing” shapes in a different way. In Madeup, we fabricate shapes with code. We sense their parametric qualities and their constraints in an active way. Molyneux’s question makes me wonder: if we have only programmed shapes and considered them as artifacts produced by computation, would we recognize them if we saw them uncomputationally, with our eyes?
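Madeup has its own syntax for building solids, which I won’t reproduce here. But as a language-agnostic sketch of what “fabricating shapes with code” means, here are Molyneux’s two solids built parametrically in Python; the function names and the sphere’s angular resolution are my own choices, not anything from Madeup:

```python
import math

def cube_vertices(size):
    """The 8 corners of an axis-aligned cube, parameterized by edge length."""
    h = size / 2
    return [(x, y, z) for x in (-h, h) for y in (-h, h) for z in (-h, h)]

def sphere_vertices(radius, n_lat=8, n_lon=16):
    """Points on a sphere, parameterized by radius and angular resolution."""
    points = []
    for i in range(1, n_lat):
        phi = math.pi * i / n_lat            # polar angle, pole to pole
        for j in range(n_lon):
            theta = 2 * math.pi * j / n_lon  # azimuthal angle around the axis
            points.append((radius * math.sin(phi) * math.cos(theta),
                           radius * math.sin(phi) * math.sin(theta),
                           radius * math.cos(phi)))
    # Add the two poles explicitly.
    return points + [(0.0, 0.0, radius), (0.0, 0.0, -radius)]
```

Notice that the code “knows” the shapes only through their generating parameters: an edge length, a radius, a pair of angles. That is the kind of active, constraint-driven sensing the question is about.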
While I find this question intriguing, the truth is that many of us can both see and feel, and much of our experiential knowledge is a combination of sensory inputs. Our eyes see glowing red metal, and we dare not touch it because of what happened last time. We reach into our pockets and feel a metal disc, and our visual cortex assembles an image of a quarter even before it’s visible.
Is fabricating shapes a third way of sensing them? If so, what kind of new knowledge do we gain by having this additional sense? How does this third sense associate with and trigger the other two?