Levin, F.M. (1997). Commentaries. J. Amer. Psychoanal. Assn., 45:732-739.
Fred M. Levin
The problem regarding consciousness, simply stated, is that no one has satisfactorily explained why consciousness is necessary (Shallice 1988, p. 381). Mark Solms bravely raises a number of questions about consciousness. After commenting on his paper, I will survey some interdisciplinary perspectives and then attempt to describe what it is that consciousness might be doing.
For Solms, and most psychoanalysts, processes within mind which are usually labeled subjective experience are no less real than so-called objective observations of the external world. Solms further argues, and I agree, that all perception involves conjecture about events, localizable either in the universe outside or in the one inside the self (Levin 1991; Posner and Raichle 1994; Lassen 1994).
The consciousness debate is muddied, however, when Solms introduces the philosopher John Searle (1995b), who asserts, among other things, that consciousness is a mystery and that there could be no such thing as an unconscious. These untenable assertions of Searle's are best answered by the research of psychoanalysis, especially that of Shevrin et al. (1996), which meticulously demonstrates the neurophysiological signature of unconscious processing (as distinguished from conscious processing).
However, it would be wrong to dismiss Searle completely. His so-called Chinese room argument (Crane 1995, p. 132; Searle 1995a), for example, actually supports the application of psychoanalytic frameworks to questions about the nature of conscious processing (see below).
The Chinese room argument states that a man inside a room, who uses various algorithms to translate English messages passed to him into Chinese and passes these back, is not undergoing the same mental states as someone who actually understands the Chinese language, no matter how good the product (Crane 1995, p. 132). Searle uses this argument against those who argue what he calls the strong case for artificial intelligence (AI), namely, that computers (like the man with his translating algorithms) can usefully model mind and brain. For Searle the strong case for AI is wrong because it asserts that the differences between human brains and computers can be ignored without consequence. Here I agree with Searle.