Monday, November 15, 2010

What is manufacturing?

In a previous post, I raised the question of the difference between manufacturing and growth, and we've been kicking that question around a bit in comments. As a first step toward answering that question—and in response to one of Neal's questions in comments—here's a quick post on what manufacturing is, and how it works.

Consider again the desk that I was using as an example in my previous post. Having already learned what it's made of, we now want to know how it was made.

The building of a desk begins with a design, a plan in the builder’s head or on paper that specifies what the final product will look like. Because the desk will be made from many parts, the plan must indicate the shape and dimensions of each part, and how these parts will fit together. Next, the builder must decide what materials each part will be made of. Having chosen oak, the desk-maker obtains some large pieces of wood cut from the trunk of an oak tree. She then cuts and shapes this wood into the various forms of the parts laid out in her plan. Once these parts have been shaped to the plan’s specifications, they can be assembled and fastened together to form the final product.

We can divide the making of the desk into three stages:
1. Formulation of the plan or design.
2. Shaping of parts from raw materials.
3. Assembly of these parts into the final product.

Some questions for discussion:
What kind of a being does this manufacturing process produce? Or to put this another way, what is the desk's way of being?

How is the desk as a whole related to the parts from which it was made? What kind of whole is the desk, and what sort of parts does it have?

Wednesday, November 10, 2010

How do we analyze life? [by Neal]

[This is a post by Neal, which I'm posting for him due to some technical difficulties.]

Noah raises some great issues, and I’d like to offer, not an answer or solution, but perhaps a framework that I think might be helpful in formulating a solution. What I’d like to suggest is that we should adopt some kind of ‘modal ontology’.

I’m taking this from a Dutch philosopher named Herman Dooyeweerd. There are some issues with the details, but I find the overall idea compelling. Basically, the idea is that ontology is not an all-or-nothing affair, but rather that all of experience is made up of different ontological levels or ‘modes’, which are united in experience but can be distinguished in reflection. Each mode has its own ‘individuating factor’ or characteristic form or focus. These modes are not only distinguishable, but are particularly ordered in that some are foundational for others. So, for example, the spatial mode or aspect is foundational for the biotic aspect, in that there can be no study of life (the individuating factor of the biotic sphere) without it having some ‘analogy’ or relation to continuous extension (the individuating factor of the spatial sphere). The biotic aspect, in turn, is foundational for, say, the sensitive/psychic aspect (individuated by feeling, broadly speaking), etc.

More foundational modes ‘anticipate’ later modes (in being foundational for them), and founded modes ‘retrocipate’ or refer back to earlier modes (in being founded on them), and so the different modes, though distinguishable in reflection, are presented in experience as a coherent whole. The key to any such ‘modal’ analysis is to not let human experience be reduced to any one particular mode, or give any one particular mode undue pride of place over the others. All are necessary to human existence, and all are present (as a coherent whole) in every human experience.

The reason I think this is helpful to the problem Noah discusses is because it helps us discuss our own human experience as both a living being but also a manufactured one (or at least, as something made up of parts; Dooyeweerd would think we were manufactured, via the process of evolution and growth, but I think we can leave that out of the discussion for now). That is to say, qua living things, we are perhaps not made up of anything at all; but qua physical beings, we are made up of things (not to mention the ways in which we are ‘made up’ of social, economic, and symbolic forces, to name a few other modes).

This, by the way, hints at another major issue: in what sense are we, for Merleau-Ponty but also in general, the products of human action [via, say, cultural sedimentation, etc. as Noah was talking about at SPEP], and in what sense are we the centers of human action [as, say, subjects or actors]? Can we distinguish these two points rigorously in human living? Do we need to? This is to say, while I agree with the distinction you are making between living bodies and manufactured entities, I also want to claim that even living bodies can be understood as manufactured (provided this means ‘made up of things’; if the term also implies a purposeful intention enacting the manufacturing, then perhaps we aren’t—depending, I suppose, on how you answer the questions I just raised).

I think such a ‘modal’ analysis could be of service to the issue here in two ways: first, by enabling us to honor the real difference between growth/living and manufacturing/inanimate, without losing track of the fact that something can be living and still have many things in common with the manufactured/inanimate; second, and probably more interestingly, to open up the discussion of where ‘life’ should be situated or how it should be understood. If ‘life’ is here synonymous with human existence, then it is not a particular mode, but can be analyzed by each of the modes in distinct ways. According to Dooyeweerd, as I have said, ‘life’ is characteristic of the biotic mode—and this makes it more foundational than higher-order. That is, here, ‘life’ is only understood biotically, and I suspect that Noah (and probably Merleau-Ponty) have something else in mind here in talking about life.

The problem of life (and what it means) is a huge issue in Husserl, and is what spurred Derrida’s analysis of difference as well as much of Michel Henry’s “material phenomenology,” but MP’s focus on animality suggests that life cannot mean the same thing for him as it does for Husserl or Henry. So, shall we talk about ‘life’: what do we mean when we speak of ‘life’ or ‘living’ bodies? And where would ‘life’ rate on a list of modes of human existence, from most foundational to most founded?

P.S. If you’re interested, here’s a link to a more thorough elaboration of Dooyeweerd’s theory of modal aspects, including all 15 aspects and their order:

Monday, November 8, 2010

What are living bodies made of?

It seems natural, in thinking about the living body, to ask what living bodies are made of. And the answer might seem obvious: living bodies are made of organs and tissues, which are composed of living cells; these cells are built out of proteins and other organic molecules, which are in turn made of elements like carbon, hydrogen, and oxygen. But this isn't the answer I'm going to offer here. Instead, I want to suggest that living bodies are not made of anything at all.

To understand what I mean, we have to turn our attention back on the question itself. What does it mean to ask what something is made of? What does this question assume about the thing in question, and what sort of answer are we looking for?

A roll-top desk not unlike my own.

Consider a simple example: an old roll-top desk that I’ve had since I was a child.

What is this desk made of? If we could pose this question to the artisan who made it, she'd probably tell us she'd built the desk out of wood and some metal fasteners. In other words, she would tell us about the materials from which it was constructed.

When we ask what an artificial thing is made of, this is the kind of answer we expect. We'd be pretty surprised if she replied that the desk was made from dead plant cells, or from atomic elements like carbon and iron. When we ask what a desk is made of, we want to know what materials we would need if we were going to build a desk ourselves. The artisan didn't assemble the desk out of elements or cells, but out of wood, nails and screws.

When we learn what these materials were, our question is answered. It would be strange if, having learned that the desk was made of oak, we continued pestering the craftsperson to tell us what the oak was made of. She would probably reply that she hadn’t made the wood—it was cut from an oak tree. Similarly, the steel fasteners were forged out of iron ore, which wasn't made but rather mined from within the Earth.

My desk, like all manufactured things, is made of materials that were not themselves manufactured. Human manufacturing depends on “raw” materials like wood and ore, which are natural formations rather than artificial products. When we ask what an artificial thing is made of, we are ultimately asking after these “raw” materials, from which every manufacturing process begins.

But what about these materials themselves—what are they made of? Here again we need to consider just what we're asking when we pose this question, and what sort of answer we're looking for. When we asked what the desk was made of, we were asking about the materials that its maker used in constructing it. The desk was made from the wood of the oak tree. But it makes no sense to ask what materials the oak tree’s maker used in constructing it, since we know that the oak tree had no maker. No one made the oak tree out of anything, because the oak tree wasn't made at all. A tree can be cultivated, but it can't be constructed or manufactured. Like all living bodies, the tree is not built, but grown.

This leads us to a very important question: what is the difference between these two ways of coming to be, manufacturing and growth? But I'll have to save this question for another post.

Thursday, November 5, 2009

On leading a (philosophical) discussion

I recently had occasion to reflect on the question of how to lead a good discussion. What happened, in fact, was that I needed to give someone advice on how to do this, and didn't know quite what to say. I've since sat down and thought about it, and here's what I've come up with.

Ideally, the members of a discussion would be able to take care of the conversation themselves. What would this look like?

  1. People would speak clearly, concisely, and to the point.
  2. People would listen to one another, respond to one another, and build on what others have said.
  3. People would move the discussion in productive directions by raising the right questions at the right times, staying on topic, and keeping the conversation from scattering in different directions or going off track.
In most discussions, however, some or all participants can't be counted on to do all these things themselves. That’s okay. Your job as discussion-leader is to do some of this work for them, to supplement their contributions so that all the things that need to get done, get done. Depending on who the participants are, you'll have to do more or less of this work. When I've taught undergraduate classes, I've found that I have to do most of this work for my students. (This usually means speaking after almost every student contribution.) When I'm in reading groups with my peers, we all share this work. But no matter what the situation and the composition of the group, this work has to get done, otherwise the discussion won't go well.

When I lead discussions, I find myself doing three different kinds of work, corresponding to the three points listed above:
  1. Reformulating
  2. Relating
  3. Redirecting
1. Reformulating
It often happens that participants will make contributions that are interesting but not sufficiently clear. In these cases, you need to help them to articulate their point more clearly. One way of doing this is to ask them to reformulate what they’ve said more clearly. If you have no idea what they’re talking about, you may just have to ask them to say it again in a different way. If you have some idea what they’re talking about, you may be able to ask them a more pointed question that will help them to clarify their point. If you think you understand their point, but suspect that other participants may not have sufficiently grasped what it is or why it’s important, then you can reformulate the point yourself. I find myself doing this a lot when I lead student discussions, e.g. “I hear two really good points in what George has said…” or “So if I understand you correctly, Michael, what you’re saying is…”

2. Relating
In the best discussions, participants will make an effort to respond to other people’s remarks, and say explicitly who they’re responding to and how (disagreeing, asking a question about, expanding on, etc.). In many discussions, however, the participants will not do this themselves, and so you have to do it for them by pointing out how the thing they’ve just said is related to what was said before. For example, it often happens in student discussions that someone will say something that contradicts someone else's earlier remark, but without noting this explicitly. It’s important to point out that the two claims are incompatible, since other participants may not necessarily have noticed this. You can then open the disagreement to further discussion, if you think it’s important, or redirect the discussion elsewhere.

3. Redirecting
At any given point in a discussion, there are many different directions in which it could go, and only some of these will support the overall goals of the discussion. Ideally, everyone in the discussion will have these goals in mind, and make their individual contributions with an eye to the big picture. In student discussions, however, this is usually not the case, and so it’s your job to keep the discussion on track. This usually involves setting the stage at the beginning of the discussion so that everyone starts off on the same page. As the conversation goes on, you may need to pose questions or suggest topics of discussion to the group. You may also need to interrupt the movement of the conversation if it’s getting stuck or moving in an unproductive direction, and bring discussion back to the things it’s supposed to be about. And you may need to correct people who are going about the conversation in the wrong way – talking too much, not listening to their colleagues, not being polite and respectful, etc.

In my experience, a good discussion is one in which many people think better together than any of them could on their own. But this means that even in situations where you are responsible for leading the discussion, you can't decide in advance how the conversation is going to go. Of course, if you’ve led a discussion on the same topic with students before, you may have some idea of the sort of things they’re going to say in response to certain questions. And there’s nothing wrong with having a plan for the sort of discussion you want to have, the topics you want to cover or the questions you want to pose. But you also have to be open to the possibility that things will go in a different direction than you planned. It can be difficult to strike a balance between achieving the goals you’ve set for the discussion and allowing it to develop spontaneously and organically. But the best discussions I’ve led are the ones where my students surprised me, and I managed to be flexible enough to keep the discussion productive even as it went in a direction I hadn’t expected.

Tuesday, May 5, 2009

The Turing test and the Chinese Room

John mentioned the Turing test in comments on my last post, so I thought I'd say a few words about it. The Turing test was Alan Turing's attempt to define objective criteria for answering the question, "Can machines think?". (See Turing's 1950 paper, "Computing Machinery and Intelligence".) Essentially, the criterion Turing proposed was that a machine is intelligent if it can persuade an external observer that it's intelligent.

In this post, I want to summarize one of the most influential critiques of the Turing test, and explain what I think is right about it. The critique is from John Searle's 1980 paper, "Minds, Brains, and Programs", and it's known as the "Chinese Room" argument.

Searle proposes the following thought-experiment: imagine you've been placed in a room with a large pile of papers printed with characters in a language you don't understand—call it Chinese. Through a slot in the wall, someone occasionally inserts some more pieces of Chinese writing. You've been provided with a detailed set of rules (in English) for correlating one set of papers with the other, based only on the shape of the Chinese symbols. No prior knowledge of Chinese is required to follow these rules: you just look up the symbols on the papers that come through the slot, choose the characters that the rule-book calls for from your stock-pile, and push these papers out of the room through the slot.

Unbeknownst to you, the papers being inserted into the room are questions written by native Chinese speakers, and the papers you're pushing out of the room are answers to these questions. The set of rules you're following is so sophisticated that to those outside, the room (or whatever's inside it) appears to be carrying on a perfectly fluent conversation in Chinese. You are equally unaware of the fact that those who designed the room and wrote the rules that you're following consider the papers that come in through the slot “input,” the papers you push through the slot “output,” and the rules you're following a “program.”

This “Chinese Room” is a computer, albeit a strange one: instead of magnetic memory and a CPU made of silicon transistors, it's built out of stacks of paper and a human being. Nevertheless, the room functions in the same way that a digital computer does: it manipulates and responds to symbolic input according to purely syntactic rules.
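To make the purely syntactic character of the room vivid, here's a toy sketch of its operation. The rule book and the messages are my own invention, not Searle's; the point is only that the procedure consults shapes, never meanings—the occupant (or the machine running this code) needs no access to what the characters say.

```python
# A toy sketch of the Chinese Room's operation: the occupant matches the
# shapes of incoming symbols against a rule book and emits whatever symbols
# the rules call for. The rule book itself is an invented stand-in.

rule_book = {
    "你好吗": "我很好",      # the occupant has no idea what these strings mean;
    "你是谁": "我是一个房间",  # they are matched purely as shapes
}

def chinese_room(slip):
    """Return the output the rule book dictates for this input, by shape alone."""
    # .get() with a default models the rule book's fallback reply for
    # any slip of paper its rules don't cover.
    return rule_book.get(slip, "请再说一遍")

print(chinese_room("你好吗"))  # prints 我很好
```

The strings could be replaced by arbitrary tokens without changing the program's behavior in the slightest, which is exactly Searle's point: the "conversation" exists only for observers who can read the symbols.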

The strange computer in Searle's thought-experiment is passing the Turing test with flying colors: it appears to Chinese-speaking observers to be fluent in Chinese, and to understand the questions they are putting to it. The question is, is this appearance of understanding sufficient to show actual understanding?

Searle argues that it is not, on the grounds that the human being inside the room doesn't understand a word of the conversation she is participating in. As Searle puts it, "whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything."

This argument has been met with a number of objections. Defenders of the Turing test have generally argued that Searle hopelessly muddies the issues by inserting a human homunculus into the workings of his Chinese-speaking computer. The Chinese Room argument seems to rely on vague, introspective intuitions about what and how we understand, intuitions that may or may not be empirically accurate. The whole point of the Turing test is to avoid such subjective definitions of understanding. According to its defenders, the only objectively valid criterion is the results the system produces; if a computer’s performance of a given task is indistinguishable from that of a human being, then the computer understands the task just as well as the human does, regardless of what it "feels like" for either.

The answer to this critique, and the true strength of the Chinese Room argument, lie in a further point that Searle makes about the meaning of the computer’s inputs and outputs: “the formal symbol manipulations by themselves… are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics.”

The computer only appears to be thinking, to know and understand things about the world, because its inputs and outputs are symbolic, and thus appear to have a meaningful content. But symbols have no content in themselves, for in themselves they are not even symbols, but only things—ink on a page, or colored pixels on a screen. They are meaningful only for beings who can interpret them as symbols, and find a meaning in them. Searle’s crucial point about the computer is that it is not such a being. The Chinese Room’s inputs and outputs appear to its human observers to be meaningful Chinese sentences, but they have no such meaning for the computer itself.

The “correctness” of the computer’s outputs, its apparent fluency in Chinese, lies entirely in the interpretation given to these outputs by its human interlocutors. The Chinese Room itself is utterly incapable of distinguishing between correct and incorrect outputs, since for it these outputs are nothing but physical effects of physical inputs, the end of a complex chain reaction. In our attempt to give an objective definition of understanding, we have ended up attributing to the computer properties that are only in the eye of the observer.

This can be seen even more clearly if we imagine that, instead of “conversing” with human interlocutors, the Chinese Room exchanges inputs and outputs with another, identical Chinese Room. There should be no temptation, in this scenario, to say that Chinese is being spoken or understood. There is here only a mechanical exchange of inputs and outputs, one computer triggering an automated response in the other, in a closed feedback loop. This is not a conversation.

Defenders of the Turing test might insist, of course, that to a (human, Chinese-speaking) observer it is indistinguishable from a real conversation. But this would again be to import into the situation an outside observer for whom the signs being exchanged are meaningful. The Chinese Rooms are completely incapable of generating meaning on their own. As Searle puts it, you cannot get semantics from syntax.

We see here a general problem with attempts to give an “objective” account of understanding or subjectivity. Turing-types are right to criticize introspective accounts, which would reduce the meaning of my situation to the meaning it has for me. However, we are no better off if we exchange the introspective standpoint for a purely external one. We will then arrive only at a description of what the situation means to the observer, when the whole problem was to describe (objectively) what it means to the system being observed.

If we take the observer’s perspective for granted then we only postpone the problem we set out to solve, for the observer is also a thinking being, and her perspective must also be accounted for. The claim that computers are thinking can only be sustained by appealing illicitly to the perspective of an observer who is not a computer, whose thought is more than an algorithm.

Tuesday, March 3, 2009

The Mechanization of the Mind

I read a good book over the holidays: The Mechanization of the Mind, by the French philosopher Jean-Pierre Dupuy. (Coming out in paperback at the end of May, if you're wondering what to get me for my birthday. ;-)

It's a history of Cybernetics, the mid-20th-century intellectual movement that gave rise to contemporary cognitive science (as well as various other fields like information theory, artificial intelligence, and analytic philosophy of mind). Thus it offers a kind of recent pre-history of today's prevalent assumptions about the nature of the mind and the brain.

The most interesting thing I got out of the book was the fact that the computational theory of mind did not, as I had always assumed, arise as a consequence of the invention of the digital computer. That is, it's not the case that we first invented computers, and then started using them as a metaphor or model for the way our own minds work. As it turns out, the modern digital computer and the computational view of mind share a common origin in mathematical logic.

Here's how it all went down, according to Dupuy.

1. Frege et al. invent modern mathematical logic, which attempts to give a purely syntactic (i.e. symbolic, algorithmic, formal, "mechanical") account of logical inference.

2. The power of this new logic, along with the view that logic prescribes the "laws of thought", led to the claim that thinking just is this sort of formal symbol-manipulation. Logicism is born.

3. Alan Turing, in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem", tries to give a purely syntactic definition of logical inference by describing an imaginary machine that could write and erase symbols on an infinitely long tape, and "remember" the symbols it has recently scanned in a finite "memory". This machine scans up and down the tape, modifying it in response to the symbols it finds.

While the formal inference rules of symbolic logic had been described as "mechanical" before, in the sense of proceeding without understanding or insight, this was the first time anyone had proposed the idea that these formal inferences could be carried out by an actual machine.

In fact, Turing's machine could only be imaginary, since its tape was infinitely long. But it's not hard to see how a similar machine could be constructed with a finite "tape"—and indeed, it wasn't long before John von Neumann proposed such a machine. And so the computer as we know it was born: a machine whose instructions are stored in its own memory, and so can be modified by the machine's own activity; a machine in which hardware and software can be distinguished.

4. Since Logicism had already identified thinking with the symbol-manipulation of formal logic, two conclusions seemed inevitable:
a) Machines are capable of thought;
b) The human brain is itself -- or at least, can be adequately modelled by -- a Turing/Von Neumann machine, a computer with a very complex program.

Thus, the computational theory of mind was not the consequence, but rather the antecedent of the invention of the modern computer. The idea that thought consisted in the purely formal manipulation of symbols gave rise more or less simultaneously to the idea that a machine might be able to think, and that the human brain must be such a machine.
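The machine Turing describes can be sketched in a few lines: a finite rule table, a tape, and a head that reads, writes, and moves. The particular states, symbols, and rule table below are illustrative assumptions of mine, not anything from Turing's paper (and the tape here is bounded by a step limit rather than genuinely infinite).

```python
# A minimal sketch of a Turing machine: a rule table mapping
# (state, scanned symbol) -> (symbol to write, direction to move, next state).

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run until the machine enters the 'halt' state or exhausts max_steps."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")          # '_' stands for a blank square
        write, move, state = rules[(state, symbol)]
        cells[head] = write                    # write (possibly the same) symbol
        head += 1 if move == "R" else -1       # move one square right or left
    return "".join(cells[i] for i in sorted(cells))

# An invented example rule table: sweep right, turning every '0' into '1',
# and halt at the first blank square.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "0101"))  # prints 1111_
```

Nothing in the loop knows what '0' or '1' "mean"; the machine just mechanically consults the table—which is precisely the sense in which formal inference was said to proceed "without understanding or insight."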

Thursday, February 5, 2009

Sick of the brain

Can I tell you how sick I am of the Brain? The 90s were the decade of the Brain and it's all been very exciting... But how do we get past the brain and back to personhood? I know at least N believes I worship the god of Science at an electronic shrine (! :) and while I do take quite an interest in cognitive science, I'm getting more and more wary of mentioning it in my classes. At the same time, I find it to be of increasing political import to raise for discussion.

For example, today, I read my students' latest paper proposals... Soooo many of them are convinced that the mind is the brain, is a person. What do I do?!? Seriously... I mean, for my own sanity! If I read one more paper that cites a biology textbook to refute Aquinas or Descartes, I'm going to scream! I encourage them to draw examples from popular culture or media... As a result, several of my students cite this persnickety article that is supposed to disabuse me of the view that one should study philosophy written before 1985.

I just feel like I am totally failing if they leave my class thinking that... So far, we've only read Aristotle, Aquinas, and Descartes. I just can't convince them that describing personhood involves describing anything more than the action of a collection of machine-like parts.

As if my students weren't enough, my therapist recently recommended Louann Brizendine's The Female Brain. (My therapist and I are now "breaking up.") The book was trashed by Nature as "psychoneural indoctrination," and there would be no need to harp on it if it weren't still selling and popular. But it is... and I think it deserves a good thrashing on a phenomenological level too.

Brizendine essentializes both gender and brains. She basically claims that there are two types of brains and, equivalently, two types of people. One, the female brain, was "marinated" in estrogen (her word) early on and is thus capable of all kinds of interpersonal connections that the un-marinated male is not. Male brains lead to semi-autistic behavior and definitely to infidelity. If you, perchance, do not identify with her phenomenological account of female experience, then you are just less female and more male (or perhaps just less human, I'm not sure). (Lesbians, apparently, were just improperly marinated.) She mentions some vaguely feminist concerns in the introduction and conclusion, but declares that she had to put aside political scruples in the service of scientific truth. The critiques she imagines seem limited to those possibly posed by second-wave feminists; she makes no mention of third-wave feminism, gender studies, or queer studies.

Not only does Brizendine essentialize gender, but she also essentializes brains. Sure, these brains show up in bodies, but only as isolated organs popped inside a ready-made shell that in itself contributes little to identity, knowledge, or experience.

What do we do with this sort of material? Again, on the one hand, if this is bad work on so many accounts, perhaps it doesn't deserve refutation or further discussion and publicity. On the other hand, if my students' papers are any indication, then this book is just a timely reflection of the dominant culture... and in a way, not to spend time refuting it is politically lazy.

My students are very concerned about animals and the environment. I feel I should be able to exploit this... at least as a distraction from The Brain.