Dawson, we're getting close, the end is near.
NOTE: refer HERE to P.1, P.2 and Dawson's prior response
A few things you stated:
“Again, not [a correspondence] between concepts and reality as in “the thing in itself” (Kant’s “Ding an sich”), but between concepts and the things which we perceive.” That couples nicely with “There is reality, and there is our consciousness of reality, and there is the relationship between the two.” Couple that with “whereas according to representationalism we perceive “appearances” of things, i.e., not the things themselves.”
If I gather you correctly, then what you call “the thing itself” is that which exists (maybe a bad word there) in perception, not in reality. You grant (as I would as well) that there's a world out there, but that we do not (in speaking of truth and facts) mirror the way the world is in itself. I would agree with that as well; we are certainly not mirrors to reality. Furthermore, if I gather you properly, you're stating [e.g.] that there are rocks in reality, but that the truths we speak about them relate not to them as they are in themselves, but to them as they figure in the relationship between us and reality, i.e. in perception. I have no overwhelming issue with that either - at least on a rhetorical level.
To spin this another way, would you agree with the statement that, yes, the world causes us to have certain beliefs, but it does not give us the reasons for them? In this way we supply the concepts of ‘objective’, ‘grayness’, ‘rock’, etc., but the world is none of these things...
“I do not think that “the world causes us to have certain beliefs,” as if our minds were passive balls of clay manipulated without our own active participation. Cognition is both active and volitional.”
That really wasn't what I was getting at with the comment (that our minds are passive balls of clay) – let me expound. The volitional/active portion of cognition is what supplies the reasons for believing the things we do (I'd suggest, using your language). Let me throw this out there: I'm with Richard Rorty when he says that beliefs are not representations but habits of action, and that words are not representations but tools. Furthermore, I'd add that the manner in which we define things (or talk about things, the nature of our discourse) relates not to the way the world is in itself, but to how things best suit our current needs and interests. To say that the world causes us to have beliefs is simply to recognize that there is a world out there that's ultimately going to push us around in ways that are not under our control. In that way it will push us in certain directions and cause us to have certain beliefs, wherein the reasons for those beliefs are our own.
I think where there would ultimately be a hang-up between you and me is your idea of an objective process of identification as a means of ascribing truth, and how far that stretches. Secondly, I don't see the need (as a pragmatist) to hold to the axioms you do. The whole idea of a correspondence between concepts and perception (and the above ascribing of truth) seems to leave out what I think is a better idea in (say) Davidson's ideas about triangulation – but that's a whole other conversation. Since we're not arguing anything specific per se, I'm happy to let all this lie for now and simply say we come at things a bit differently, yet both agree that Sye is full of shit.
Finally I've seen two people now make comments that say something along the lines of the following (in this case by openlyatheist):
“As for the axiomatic nature of the senses; whenever an apologist pulls some such Plantinga-type move, I simply point out that anyone attempting to convince me my senses aren't reliable makes use of those very senses in presenting their argument to me.”
NOTE: this comes in a couple of variations. I wouldn't try to suggest that one's senses are not reliable; the question I had was how one knows that they are. Essentially the question asks for an account of the senses, or a proof of them. Of course, I wouldn't ordinarily ask someone this, but it seemed to apply to the notion that "consciousness is consciousness", taken as an axiom.
This all hangs upon what one means by the senses and consciousness. If one defines consciousness and the senses as on par with a mental state that aligns itself with (say) a “feeling” (as in, I feel that I'm conscious as I'm perceiving), as opposed to a more behaviorist/objective approach that simply says consciousness is “what we observe” in other people as they interact with their environment, then you're begging the question and/or presupposing that someone else has such feelings. This runs along the lines of a comment I made earlier: you cannot prove with certainty that someone else loves you; you cannot prove they're experiencing a certain mental state. The only thing we can say is that the “behaviors” we associate with love are reflected in a certain person, and from that we infer certain behavioral patterns from them in the future. In other words, I'm making a distinction between consciousness as an internal state and consciousness as an observed behavior. So the best we can say is that the behaviors we associate with consciousness are present in person “X” or thing “Y”.
If you/we say that to be conscious is simply to perceive something, and steer clear of referring to perception in terms of internal states of affairs, then I have no real problem. Again, there's no way to prove that something is conscious by referring to internal states – no way to prove that I'm not just some mindless meatpuppet spouting out random words and actions. Let me give an example. Let's suppose (as the wonders of science will surely allow) that at some point artificial intelligence becomes so advanced that we create a human being – not organic, but electronic. Supposing that it's so advanced that it can react to anything in its environment as we do – it can learn, react to pain, take pleasure in a pair of nice tits (or rippling pecs) – i.e. it reflects all the same behavioral patterns as a real person does, would you say this piece of AI is conscious? If not, why not? If you would answer no, then it would seem that you are granting and/or presupposing that people have internal states that they feel, even though you can't actually prove or account for it, and we're back to having some baggage on hand.
Or perhaps this is an even better thought experiment. Suppose it's sometime in the future I described above with AI, and you get a horrible cancer in the brain that keeps spreading. As the cancer spreads it's cut out, and they start systematically replacing parts of your brain with equivalent silicon parts that function in the same way the removed organic brain matter did. Will there come a point in this scenario at which you stop being conscious, because you are slowly becoming nothing more than an advanced computer? I.e., will there come a point when you have no conscious recognition of internal states, even though you still appear (to everyone else) to be the same person, or at minimum a person who thinks, talks, and reacts in the same manner everyone else does?