Let me get back to more technical philosophy for a moment. I've often mentioned the connection between Derrida and Peirce over the role of vagueness in philosophy. Vagueness is, of course, key for Peirce, and I've suggested that a lot of Derrida's philosophy arises naturally out of Peirce's notions of semiotics and inquiry. However, one big issue is the nature of aporias.
Now for Derrida aporias represent not just logical contradictions. They represent blind spots in our understanding. Thus he attempts to subvert arguments not by finding false premises as such but by finding contradictions. This ought to remind us of Socrates and his style of doing philosophy. You'll recall that Socrates proceeds by getting people to submit definitions which he then tests. Eventually the dialog ends when each definition that his interlocutors have provided leads to a dead end. These dead ends are blind spots because the point is not simply to disprove an argument. Rather it is to suggest that our definitions are blinding us to the reality of the idea.
Typically, especially in later neoPlatonic forms of Platonism, this is seen as entailing that the ideas (what the definitions were attempting to represent) are beyond the grasp of spoken definitions. That is, we have to move from the realm of discourse into something beyond. For the neoPlatonists this was the realm of pure intellection that can never be captured by discourse. Thus the neoPlatonists developed a kind of negative theology of these ideas.
Now I don't want to suggest that either Derrida or Peirce believed in some other realm of pure intellection. Far from it, in my opinion. Both, I think, take time and finitude a little too seriously to allow that move toward a kind of eternal presence. What I wish to point out is how vagueness relates to all this.
Vagueness is basically the idea that we understand any term only in part. What is left unsaid is not left up to us to decide. Allow me to use a Peircean example: "A man whom I could mention seems to be a little conceited." Now while I don't know who this man is, I can't merely pick a man to fulfill the meaning of the sentence. Yet clearly there is a real man who fulfills the truth of the sentence. So I am left in a place of undecidability.
I can attempt to reason my way out of this place by making guesses and testing them. I can attempt to resolve this vagueness through a process of dialog. Both of these depend upon a process of inquiry, though. Thus vagueness ought to move us toward a process of inquiry where we attempt to make our ideas both clearer and more determinate.
So how does the aporia relate to all this?
What I believe Derrida is attempting to do is to show that definitions, or the use of terms in arguments, end up taking vague terms and treating them as if they were not vague. That is, they attempt to subvert the process of inquiry by making terms determinate without authority, as it were. The problem is that as this is done, the "filling in" of the empty places in the vague terms or concepts is inconsistent. It is, I think, always possible that they are consistent but simply wrong - however, Derrida (and Socrates) attempt to find those places where they are inconsistent. This is possible because typically this "filling in" is not done consciously.
That is, we may not be aware that some terms are as vague as they really are. Or we may give lip service to their vagueness and then in our practices treat them as far less vague than they are. Now, since as we do this we're not quite as aware of what we are doing as we ought to be, we end up with inconsistencies. Reason (the hallmark of which is a kind of consistency) is thus subverted by irreason. Reason ought to be wrapped up primarily with inquiry and thus a bringing to awareness of the gaps in our reasoning. Instead irreason fills in these gaps, giving an unsolid firmament the appearance of solidity and stability.
BTW - I really should present a great quote from Peirce scholar Joe Ransdell that relates to this. It's from his paper "Peirce and the Socratic Tradition in Philosophy."
In its origins Socratic dialectic probably developed as a modification of practices of eristic dispute that made use of the reductio techniques of the mathematicians, perhaps as especially modified by the Parmenidean formalists. Socratic dialectic differs importantly from the earlier argumentation, though, in at least two major respects, first, by conceiving of the elenchic or refutational aspect of the argumentation not as a basis from which one could then derive a positive conclusion either as the contradictory of the proposition refuted, as in reductio argumentation, or by affirming the alternative because it was the sole alternative available, but rather as inducing an aporia or awareness of an impasse in thought: subjectively, a bewilderment or puzzlement. Second, it differs also by using the conflicting energies held in suspense in the aporia as the motivation of inquiry.
Correct me if I'm wrong, but an "aporia" for Derrida is a necessary part of producing a text. Socrates would show that definitions are never satisfactory, but I think he believed that to be a failure in human reason, whereas, I think, Derrida believed the "failures" are intrinsic to language production. The "failures" for Derrida aren't lamentable as they are in Socrates, displaying merely the ignorance of people who think they know something; they are the hallmarks of the great texts in philosophy. Derrida thinks the aporias tell us something profound about what meaning is rather than about our ignorance.
The way Socrates was taken by the neoPlatonists is pretty much the way Derrida does things. Definitions for Socrates, and other writing, are never satisfactory because of the intrinsic limits of language production. Of course the counter-argument would be geometry, which Plato puts forth as knowledge. But I guess Gödel made even that counter-argument impossible.
So I don't think the distinction you make necessarily holds. But that gets into the issue of how to read Socrates. (And of course aporias are primarily a part of the early dialogs)
I don't really understand why people treat vagueness as a characteristic of terms or words. As far as I can tell, vagueness is just a failure of usage: i.e., a problem with a) a person communicating their own intentions, and/or b) a listener grasping or accepting the intended scope of the locutor's words. If the context is right -- i.e., if the background information of both myself and my listener is the same -- then the pragmatic meaning of "A man I could mention" is perfectly clear, and not at all vague.
"then the pragmatic meaning of "A man I could mention" is perfectly clear, and not at all vague."
That's true, at least superficially. If I were going to argue Derrida's case I'd say that a text under consideration has to have some kind of critical mass before you could meaningfully talk about an "aporia". Building on the Gödel analogy (which I'm not sure I buy, but anyway), a formal system has to be complex enough before undecidability applies. There are formally decidable systems. If you just throw out an axiom here and there randomly and call it a system, it might end up being complete. Perhaps somewhat like that, we might say that throwing out a few trivial sentences hardly displays undecidability.
Of course, in that case we have to ignore the fact that a larger language is necessary to consider the sentences as standing on their own, and the analogy breaks down. We might also draw a parallel to Heidegger's project and say that if our Dasein is a human infant we probably won't learn much about phenomenology.
In fact, what you say about holding "context" constant is key. Because, after all, what Heidegger is interested in is Being. Dasein is a way to explore it. In other words, all that stuff in the background that is required for presencing (we might say the "context") is what Heidegger is after. And that's true as well for Derrida. Derrida isn't interested in this or that sentence, or this or that book. He's interested in the "Text" or, to translate loosely, the "context" you are talking about. In order to see the "context", to reverse engineer it, requires a big enough sample size to represent it. So it kind of goes without saying that if "context" is "fixed" then any undecidability would be a mere practical problem. I don't think Derrida would have disagreed. But in the end, it's the "context" at issue, not the individual sentence.
The reason Peirce follows the method he does is because of his background in science, especially physics. There this use of vagueness is appropriate because the signs investigated aren't speech acts but our understanding of the universe.
Whoops. Accidentally hit post.
Anyways, for the example in question one certainly could look at it in terms of the intents of the utterer, which is what you are doing. But by focusing not on the utterer but on the interpreter, I think Peirce establishes things much better.
So your point would be true if, for a general semiotics, we considered signs in terms of their origin. But in general that ends up causing problems. Now one can talk about the ideal origin and ideal end of interpretation. (And Peirce does this) But since as a practical matter we find ourselves always between those two extremes it seems more helpful not to take that approach.
The other problem is with intents. If one takes intents as some immaterial ideal entity à la Fregean meanings, then what you say works fine. However Peirce doesn't do that (and I think there are problems with this general Fregean approach).
Gad, we're getting away from my preferred manner of explaining vagueness by addressing more fundamental issues. I guess I can say something about them, but I just wanted to point out that we seem to be straying.
The collapse of distinctions, e.g., between self and environment implicit in Dasein, threatens to collapse still other related distinctions, e.g., intent v. interpretation. That just leaves us with all kinds of muddle-headed messes. And the root cause, I take it, is that we overestimate the ontological significance of meanings. E.g., if I were really determined to do so, I could make a distinction, arbitrarily, between the color "reddish orange" and the color "orangeish red". At the time, I might even recognize patterns associated with each which are, on cold and honest reflection, indistinguishable. But while this might throw off my ontological bearings, it will never survive in the heads of others.
I take it that you're saying something like, meaning is regular use in context, and then pointing out that we'd have to be exhaustive lexicographers who observed all possible uses before we could understand a meaning. But that presumes a linguistic descriptivism which I don't share.
Focus is one thing, analysis is another. I didn't mean to suggest that interpretation is not a part of the science of meaning. Quite the opposite, meaning must involve interpretation, intention, and a semiotic medium. A proper philosophy of language is a kind of semantics of the revolving door, where at one moment we check the intent, the next we check the sign, and the next we check the interpretation; and each of the three has to connect to the others somehow.
Failures in our revolving door semantics may create at least two kinds of aporia. Vagueness can involve the failure of the speaker to make her precise intentions clear to the interpreter through communicative channels, where either a) the speaker recognizes how vagueness has come about, and knows immediately how to correct it (which is a problem with encoding intent into intelligible language); or b) the interpreter decides the speaker's intentions are deficient in some way, though the speaker herself doesn't (which is a problem with decoding intent relative to the interpreter's understanding).
Ben, let me put it another way. You are only considering vagueness roughly in the context of speech acts or something similar. But consider objects that give themselves to us. Now something is being "communicated," although clearly not in the sense of a speech act (i.e. we're not talking about intents). But it seems to me that the same situations that occur with speech acts happen here.
Put simply, I'm not sure why we need limit discussions of vagueness to speakers and intents.
Clark, fair enough, surely our theory of linguistic meaning might stand to be shored up by a theory of non-linguistic meaning. Grice, who was as intention-obsessed as anyone else you can find, would be the first to admit that there is non-linguistic meaning: i.e., when we say "That smoke means fire", we're likely not talking about someone's intentions.
But this is still missing something. There is such a thing as an interpretive goal, and therefore, an interpretive intent. Interpretation, in many cases, is also intentional.
Example: pretend that you're a lone ranger in a forest somewhere. Let's also say that smoke meant one of two things: either fire, or the operation of a portable smoke machine. Let's also say that, for whatever reason, you have great incentive to act on a fire, but great disincentives to act if it's a smoke machine. Here, your interpretive goal is to decide what the smoke means, and specifically, whether or not it means "fire". But the signal itself is ambiguous. And there is no hope of solving the ambiguity on the basis of looking at the sign (smoke) itself: you need more information to satisfy your interpretive goal. In this case, you're stuck with what I called aporia (b).
Still, there are some cases where interpretation is without any obvious goal. If it turns out that these cases share the same features as the ranger case, then the pragmatic theory of vagueness is in trouble. I.e., let's say that I'm reading a novel. There are no serious consequences if I continue to read it, or if I put it down. I'm just reading by habit, in a relatively zoned-out state. I'm not even reading to understand it -- for reading for the purpose of understanding is a kind of interpretive goal, which by hypothesis we've dispensed with. But that kind of "reading" isn't interpreting in the first place, since interpretation really is nothing more than taking information and trying to understand it. It follows that interpretation is goal-oriented in a deep sense.
Certainly I don't have much trouble with what you outline above. I just don't see how it has anything to do with the sense of vagueness I outlined. Certainly how things give themselves to us is affected by such things as mood, our projective stance, and a lot else. But this doesn't mean that the thing as given (or perhaps better, unveiled) is not given as a vague representation.
So I guess I'm confused as to your ultimate point.
Clark, if I'm right, and if we abandon the notion that vagueness has much to do with "terms", then whatever "unreasoning" we're stuck with is limited to those two aporias which I mentioned. And if we presume that even the slightest bit of communication is possible in the first place, then these aporias are easily fixed.
OK, I just wasn't clear on what you were exactly arguing for. I don't mind saying there are two kinds of vagueness but I think it incorrect to say that there is nothing like vagueness in signs.
To argue, as you did in #4, that vagueness is just a failure in "a person communicating their own intentions" or "a listener in grasping or accepting the intended scope of the locutor's words" is, as I said, to leave us unable to make sense of non-communicative signs. I.e. we seem unduly limited.
That's why I don't quite see how your earlier comments (in #8) get us anywhere. You bring up intentionality in terms of goals of inquiry. But the problem is that inquiry often doesn't know where it is going. The approach you bring demands that the ends be known in advance. In Derridean terms it seems to me that you are trapped in a logic of presences. (While Peirce doesn't use that language, a similar critique could be made in terms of his thought.)
To be clear, the advantage of putting vagueness in terms of the sign rather than the interpreter or the utterer is that it makes very clear what one is doing: one is clarifying the effect of the sign, i.e. its determination. If we move the sense to the speaker or interpreter, then we are forced to assume that all signs are already fully determinate. But what does this mean? How could a sign be fully determinate? Are we speaking of some "public" meaning? (Corporate sense in terms of Searle's speech acts.) Are we speaking of some ideal quasi-Platonic meaning à la Frege or Husserl?
There is also the issue of time, which always looms large (and is so frequently ignored in discussions of speech acts). So if I utter the sentence "a man is coming to meet us," we could say that the identity of the man is vague. That is, we can't simply pick a man. But at the time of the utterance the identity of this man isn't fixed yet. Now we can appeal to some hypothetical matrix of expectations, the way most internalists do with intents. Or we can take what is in my mind the much simpler approach of just recognizing that we have a sign that is not fully determined and which is not under our freedom to determine.
Put simply, I just don't see what moving the location of vagueness gets one and I see that one loses a lot.
(As to why, it's that problematic of moving from intents into these explicit expectations which always seem rather ad hoc to me)
Just to add, I think what we have are different models of signs and the question then becomes which one works better. I see this as primarily a kind of question of categories.
You're right. I should have worded (b) in a more generic way, i.e., as "an interpreter in grasping or accepting the scope of the associations of some signs". But even when formulated in this way, both intention and interpretation are still ubiquitous, because the relevant aspect of interpretation that we're looking at is goal-driven. Reading natural signs, as I suggested, can be expressed in those terms.
Rather, the limiting case would be of a person who engages in free association; that wouldn't be goal-driven in an interpretive sense.
It is true that goals need to be known in advance, at least in some sense.
I don't think that we need to assume that signs are "fully determinate". They're not, because determination is a matter of interpretation and intention. The entire point is to stop talking about signs as if they were or weren't determinate. To speak of "determination" is to speak of interpretation and intention with respect to some sign, not signs alone.
To drive the point home: if signs were "assumed" to be fully determinate, then it seems likely that the two aporias I listed would never arise. In addition, we wouldn't have to exercise either felicity or charity to "communicate"; but then again, we would have to be computers to do that. But we're not.
I don't really understand what the problem is with the "a man is coming to meet us" example. The proposition is indeterminate because the intentions were indeterminate and could only be met with the so-called indefinite article. Clarity must be packaged with the use of either definite articles or names, or at least with the uses which correspond to either. But this is still a feature of usage, and any semantic features (i.e., when we speak of the definite and indefinite articles) are parasitic upon use.
The virtue of the pragmatic account, I think, is that it is accurate. Maybe it isn't simple, and maybe people argue ad hoc, but neither of those considerations are really that serious.
If, instead of "goal," we put a kind of telos of final interpretation, I might be inclined to agree. After all, the goal of a speaker might be a communication, but the telos of his words might be something quite a bit more. In any communication we communicate more than just what we are conscious of intending. A misogynist, for example, might communicate that fact about himself.
The reason I make this distinction is that there is that towards which a sign tends and that towards which we try to aim it. We err if we conflate the two. That is, we attempt to use signs; however, we cannot master signs.
If by "pragmatic account" we mean the Peircean account then the emphasis is on signs. This "towards which" that the sign is destined is called by Peirce the final interpretant. The actual effect of a sign is its dynamic interpretant. And of course most signs have numerous possible dynamic interpretants. In addition to these two interpretants Peirce described the immediate interpretant which is its possibility.
Now the utterer attempts, through his beliefs regarding the immediate interpretants of a sign, to produce a desired dynamic interpretant. But due to both fallibilism and limited knowledge they cannot achieve this mastery.
The role of the interpreter, as I understand it, is to attempt, through context, to discern the intended dynamic interpretant. But of course due to the nature of the sign the interpreter can learn quite a bit else. All of which is part of the hermeneutic process.
Now certainly the attempt to capture the relationship between the utterer and the desired dynamic interpretant is a valid goal, but hardly the only one.
Regarding Peirce's example: it's certainly true we can consider two signs. One is the sentence or proposition as a sign. The other is the utterer and their goal as a sign. The sense of determination/indetermination you pose certainly works for the goal of the utterer as a sign. However, the value in the Peircean approach over the more mind-centric approach you espouse is that it works for signs in general.
The value of the Peircean approach is that it encompasses what you call the pragmatic account but explains far more. So to suggest that the Peircean account is "inaccurate" seems quite erroneous since it is simply far more general than what you outline, concerning as it does signs in general rather than simply signs in terms of intentions. After all some sign processes can't reasonably be considered in terms of intentions. (Consider say physics as a semiotic)
I think there have been two related themes that we may have crossed wires over. My original intent was a) to make a stand on the relation between vagueness and signs, but in the process we've also discussed b) the relation between signs and natural events. I think your points are very interesting, and they create in me a desire to read Peirce and the Pragmatists, as well as increase the scope of my studies. (I should point out one ambiguity: when I spoke of my support for a "pragmatic theory of meaning", I meant it in terms of the field of Pragmatics, and not of support for the Pragmatists. Sorry for that.)
And though what you've proposed may be interesting as a research project that is parallel to, and relevant to, study in the philosophy of language, I also think that the study of merely the sign and the interpretation falls outside of it. A philosophy of meaning must be a semantics of the revolving door, that is, of sign, interpreter, and intention -- otherwise it isn't a philosophy of meaning. And it seems to me that all languages are either vehicles of meaning, or they are mere contrivances that are more a mockery of language and meaning than anything else. Insofar as the study of sign and interpretation is a study within this revolving door semantics, we are in good shape to study both vagueness and language in general. But insofar as we try to escape it, for instance by putting it to use in the decoding of natural phenomena, we may be engaged in sensible semiotics, but we are doing ourselves a disservice as philosophers of language.
I think that the Peircean toolbox helps us in that, insofar as I understand what you've said. That is, we must be interested in explaining the successes and failures of intended-but-conventional dynamic interpretants (via felicity), and the successes/failures of intended-but-idiosyncratic dynamic interpretants (via charity). Other features of analysis, like the final interpretant (which I take to be the successful decoding of intentions from another) and the immediate interpretant (which I take to be the interpreter's understanding of conventional associations), are also quite salient and useful. What I would insist upon is that, linguistically, there is no such thing as an immediate interpretant without a final interpretant. Natural meaning is not linguistic, and it is only meaning insofar as we anthropomorphize the world, and decide (out of whimsy) to say that causation is intention. This may be metaphysically suspect, but it at least has some place in a semantics of the revolving door.
Ben, I'll say this: you've piqued my interest in why one would locate vagueness in words instead of in interpreters. Although, as should be clear from my comments, in Peirce one can never separate interpreters from signs, given the nature of his semiotics. (This isn't true of all forms of semiotics, for sure, and certainly not of most philosophies of language, which tend to implicitly adopt a 2-place logic.)
So I've enjoyed the discussion and have been thinking of a post to try and tie everything together.
To add, when you talk about "vehicle of meaning" there is a lot in that. Communication is usually seen as this "vehicle" that "conveys" the meaning. But this means that one has to consider the problem of both replication as well as what meaning is. That is one can't simply hold those two issues in abeyance while one talks about the "easier" issues in language. That is, of course, what is typically done. Peirce doesn't do this which is why he adopts a general logic of signs.
What one arrives at when one considers the broader problem of semiotics ends up coming back to that simpler issue of language. I've been meaning to discuss this in the context of Davidson and his limiting to verbal language. But just haven't had the time to do the topic justice.
I'll just say that when you say, "linguistically, there is no such thing as an immediate interpretant without a final interpretant," you immediately illustrate what I take to be the most glaring error in most philosophy of language. It is what Derrida calls logocentrism. I recognize Derrida tends to be a pariah in these circles, but that's in part because for him intents are just another text and there is no ideal meaning à la Frege, Husserl, or others. Even if one rejects Derrida as an unclear or muddled thinker (I'd disagree, but I understand why he gets that label), there are aspects of the same thing one can find in Quine and Davidson, even if their own solutions didn't prove acceptable. (In particular, Davidson's approach to language ended up being a dismal failure in my opinion - even though I find it terribly provocative and interesting.)
OK, one more caveat. When I say no immediate interpretant without a final interpretant I mean a present final interpretant. In one sense there is logically always a final interpretant, just not one that is present in any logical sense.
But however Derrida feels like using the word, "text", there is surely a difference between the ways we figure out intentions and the things we do with words. Our internal representation system and propositional attitudes may be "language-like" (as in Fodor's "language of thought"), but that's really just playing with words at the second order. What has been called a "private language" is really just our intellect which mediates between the world and our understanding of it. And the "language" metaphor seems like a less adequate description than simply saying "the intellect", so I wonder what the use is in taking the former view, except to prop up some questionable arguments.
I'm not quite sure what you mean by "a difference between the ways we figure out intentions and the things we do with words." On the face of it they are different: one deals with production and the other with interpretation. Yet, as we all recognize, in any production we are simultaneously doing an interpretation. When I write my words I'm thinking about how they will be interpreted by making an interpretation of them myself. I may look at what I've written and rewrite it as not expressing what I want. But to do that I have to interpret the words (and arguably make an interpretation of what my intents are). I think what Derrida would argue is that the ways we figure out intentions are wrapped up in the ways we do things with words, and vice versa.
As to "private language," I agree it acts as mediator, but I'd just argue it is never private. (At least no more than any language.) So I'd just say language (or, in general, signs) mediates. The problem with using language (as, say, Davidson does) is that language as typically discussed is too narrow. Thus the appeal to signs by those focused on semiotics. Derrida, for his part, focuses on trace, arche-writing, and so forth, which ends up being the same thing. (He uses these metaphors to avoid some of the theories of signs one finds in philosophy, but trace is basically sign in the Peircean sense rather than the Saussurean or Morris sense.)
My only point in this latest post was to say that there is a difference between decoding transmissions (however much merit the metaphor of "transmission" has), and judging intentions. That is to say, there is a difference between the two kinds of interpretation, at the very least. If there weren't, then we'd be perfect communicators. Every possible interpretation of a message would be at a 1:1 correspondence with the intentions of the other person. But that's nonsense; I misunderstand people all the time, and am constantly misunderstood (by angsty Wikipedians, my boss, the world, my cats, etc). If we admit that there is a difference between these two kinds of interpretation, then we have reason to not take seriously the "everything is a text" creed.
I'm afraid I'm not following.
The "everything is a text" claim would imply that to judge anything we interpret a text. The problem with "transmission" is that in any transmission you've taken something and moved it. But if meaning depends upon context, and this transmission has to maintain meaning, then that has profound implications for the nature of texts, meaning, and interpretation.
So it's a mistake to treat "transmission" as mere decoding, as if decoding weren't itself an interpretation once we move from merely shifting tokens to determining meaning. Put another way, if meaning is a matter of rules regarding token change (say, as in standard encryption) then certainly we merely move from text to text in a deterministic fashion.
But also clearly, as you say, we can misinterpret. Of course, while that's possible with a determinate rule, it's less common. So what is going on? Well, it's because the tokens and the rules are obviously incomplete. So interpretation is the creation of rules to move from one text to another.
That might be the shift from some sentence, plus sentences about context, to a translation of that sentence into another. So, for instance, the sentence "I'm blue," when combined with the sentences "John uttered this" and "John is depressed," leads us to a translation of "I'm blue" as "I'm depressed." Change the context to remove the bit about depression and add in a bit about painting, and you might get "I'm covered with blue paint."
The point is that we're talking about information transfer ultimately. And the problem is that all transmissions lose information. Interpretation is the art of adding information back so as to (hopefully) recreate the original. Encryption works solely because we limit the rules. But if you don't know the rule a text was encrypted with, then you find yourself in trouble, although you may be lucky enough to guess. But ultimately, what is the problem? It's that part of the information for the transmission is included in the context. Encryption works simply because we ship (transmit) the information in two streams: the communication of the rule and the communication of some text(s).
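The "two streams" point can be made concrete with a toy shift cipher. This is just my own sketch (the cipher choice and all the names here are my invention, not anything from the discussion): the transmission determines a meaning only once the rule arrives alongside the text; with only the text, the interpreter is left guessing among candidate readings.

```python
# Toy illustration: a transmission carries its information in two streams,
# the rule (the shift) and the text (the ciphertext). Lose the rule and
# you face 26 equally well-formed "interpretations" of the same tokens.

def encrypt(plaintext, shift):
    # Shift each letter forward through the alphabet; the shift is the rule.
    return "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.isalpha() else c
        for c in plaintext.lower()
    )

def decrypt(ciphertext, shift):
    # Decoding is deterministic once the rule is also transmitted.
    return encrypt(ciphertext, -shift)

message = "meet at dawn"
cipher = encrypt(message, 3)

# Stream 1 (the text) plus stream 2 (the rule) recreate the original.
assert decrypt(cipher, 3) == message

# Without the rule, the interpreter can only enumerate candidates and guess.
candidates = {s: decrypt(cipher, s) for s in range(26)}
print(cipher)
print(candidates[3])
```

The moral tracks the paragraph above: the ciphertext alone underdetermines the message, and "interpretation" here is just adding back the information (the rule) that the bare transmission lost.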
The problem with intents and so forth is that those rules simply aren't to be had.
Perfect communication entails perfect transfer of information which simply can't happen.
What I'm saying is that there is a difference between the kind of interpretation we do when we're decoding, and the kind of interpretation we do when we're figuring out intentions.
There are two stages in any interpretation. To use your example: at stage (A), when confronted with the utterance "I'm blue", we decode a set of possible messages: a) I'm depressed, or b) I'm physically blue. That is a species of interpretation driven by an understanding of conventional associations. Then we move on to stage (B), where we choose between the messages on the basis of relevant contextual features. These are, I take it, uncontroversial remarks.
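The two stages described above might be rendered as a toy program; the convention table and contextual test are invented stand-ins, not a model of real pragmatics:

```python
# Stage (A): decode conventionally possible messages.
# Stage (B): choose among them using contextual features.
CONVENTIONS = {
    "I'm blue": ["I'm depressed", "I'm covered with blue paint"],
}

def stage_a(utterance):
    """Return the set of readings licensed by linguistic convention."""
    return CONVENTIONS.get(utterance, [utterance])

def stage_b(candidates, context):
    """Pick the candidate that best matches the contextual features."""
    for c in candidates:
        if any(word in c for word in context):
            return c
    return candidates[0]  # no contextual cue: fall back on the first reading

msgs = stage_a("I'm blue")
assert stage_b(msgs, {"depressed"}) == "I'm depressed"
assert stage_b(msgs, {"paint"}) == "I'm covered with blue paint"
```

Even in this caricature the two stages consult different tables: (A) looks only at conventions, (B) only at the situation.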
But the mere fact that we have two distinct stages in the interpretive process demonstrates that there is a distinction between decoding and figuring out particular intentions. They are not the same. That is why miscommunication is capable of arising in the first place, and why it is not particularly useful to say things like "everything is a text".
It's true that context matters to interpretation, and always will, even when we speak of bare sentences and not utterances. But there is a difference between situated and non-situated contexts. A situated context, complete with ancillary information about speaker's goals, knowledge, etc., is necessary for interpretation at stage (B). However, interpretation at stage (A) requires a less robust context, where the interpreter is dealing with questions in terms of the conventions of language. These are qualitatively different stages, with qualitatively different sorts of information (and subsequent information loss). I may fail to interpret (A) felicitously because I have poor hearing, or bad eyesight, or am having a bit of trouble parsing a garden path sentence. But this sort of interpretation is different from (B).
I have been arguing previously that something like stage (A) cannot be considered linguistic or meaningful unless paired with stage (B). But that does not mean I am conflating the two stages, and I wanted to make that clear.
I take it your final remark -- "The problem with intents and so forth is that those rules simply aren't to be had" -- is basically alleging that there is no such thing as a pragmatic rule. That would be very hasty, I think. Relevance theorists, for one, would be unhappy. Sociolinguists would be similarly incensed.
By "those rules simply aren't to be had" I mean those rules aren't had completely in a fashion that could make a determinate meaning the way say an encryption program can deal with a known text and code.
So it's me injecting vagueness into things again.
My point about your two stages is roughly that the same thing is going on in both. I don't deny we can see differences. But that's not my point.
When reading a text we take some rules (some guessed) and apply them to the text to get a meaning. We then examine the meaning to see if it makes sense. But we do that with what you are calling encoding as well. The only difference is that with encoded or encrypted messages we typically have more faith that we know the rules. But the process is identical from what I can see.
The step that is of interest, and where I see you making your point, is figuring out what rules to use. For a nice encryption we do this via contextual information (e.g., we know messages sent from some email contact will use DES). But that's the same for all interpretation. We take contextual facts to decide what rules to use to interpret the text. The problem is that with many kinds of text there are simply many ways to take the context, leaving us with an underdetermined situation. That is, there is more than one reasonable possibility. That's why misinterpretation is so possible and arguably common.
Given that, I'm not sure why it is "not particularly useful" to say things like everything is a text. In our encounters all we have are signs that have to be interpreted and decoded via these procedures. Some classes of signs are easier to decode than others. But the ultimate process seems the same. There's not a fundamentally different semiotic process going on when science turns to nature than when I turn to a novel.
The difference is between wide and narrow senses of "context".
When, at stage (A), I read a sentence and decode a set of possible messages, then I'm using generalized grammatical rules (and all the cognitive faculties that understanding a grammar presupposes). The very fact that stage (A) involves the possible recognition of oneself participating in linguistic decoding by use of one's faculties, and recognition of the statement itself, indicates that there is a context involved in some narrow sense. But when I'm at stage (B), I'm using my knowledge-base in its entirety and applying it to a particular situation. That is a wider sense of "context".
The two processes involve different (though overlapping) bases of knowledge. The narrow sense uses cognition insofar as it is related to grammar. The wide sense uses unrestrained cognition, bringing in facts about persons, their dispositions, sociolinguistics, relevance, etc. The former is cognition as a "means to an end", where the goal is decoding possible felicitous intentions; the latter is cognition for the purposes of interpreting the actual intention. If you accept the preceding, then it is not possible to maintain that the two processes are identical.
Note that I'm not arguing the two processes are identical. Rather I'm arguing they are inseparable. Which is a subtle but important difference.
To clarify: decoding requires determination (interpretation) of the rules, then application of the rules, then interpretation to see whether a successful decoding took place. Interpretation requires decoding (application of rules), then another interpretation to check success. Both, if there is a failure, require abduction to determine rules.
There are three components.
1. Deductive. This is obvious in decoding/encoding. But clearly it is true in regular language as we apply rules.
2. Inductive. This generalization process is obvious in interpretation, since we have to generalize from similar situations. But it applies in encoding/decoding as well, for the same reasons, although to a lesser degree.
3. Abductive. This is the guessing process as we attempt to hypothesize rules. We take the hypothesized rules and then test them via our deductive and inductive processes. This is obvious in regular language use and is the basis of the principle of charity in interpretation. But it is also part of decoding, especially when cracking codes.
All communication has all three aspects, although some may be much more dominant than others.
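The abductive component in code-cracking can be sketched with a Caesar cipher: hypothesize a rule, apply it deductively, and test the result against generalized expectations. The tiny vocabulary standing in for those expectations is an invented assumption:

```python
VOCAB = {"attack", "at", "dawn"}  # stand-in for inductive expectations

def shift(text, n):
    """Deductive step: apply a hypothesized shift rule to lowercase letters."""
    return "".join(
        chr((ord(c) - 97 + n) % 26 + 97) if c.islower() else c
        for c in text
    )

def crack(ciphertext):
    """Abductive step: try each candidate rule, keep the one that makes sense."""
    for n in range(26):
        guess = shift(ciphertext, n)
        if all(word in VOCAB for word in guess.split()):
            return n, guess  # hypothesis survived the test
    return None

ciphertext = shift("attack at dawn", 3)
assert crack(ciphertext) == (23, "attack at dawn")
```

All three components show up even in this toy: the shift rule is deductive, the vocabulary check is inductive, and the loop of hypothesis-and-test is abductive.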
The comments I was responding to were "My point about your two stages is roughly that the same thing is going on in both" and "But the process is identical from what I can see." The very fact that stages (A) and (B) appeal to different bases of knowledge indicates that there are different processes in that respect. But now I think we understand each other and are not fundamentally divided on this issue.
But are they inseparable? We use stage (B) without appealing to (A) when we're dealing with natural signs, since a mere sign has no grammar. We use abduction and the like, but we don't use linguistic compositional rules to develop tentative interpretations.
Let me clarify.
The overall process is identical in that all aspects are involved in both. Yet clearly in each we can discuss distinct elements. The danger is in taking those elements as being truly separable from the whole. That is where I disagree.
We can talk about, say, a zip algorithm. But to talk about it in use is suddenly to be within this larger holistic process, which is essentially the hermeneutic process. The problem is that folks, in talking about uncompressing a zip file, tend to neglect the steps of identifying the file type, checking that the decompressed output is an intelligible file, and so forth. Likewise, in talking about linguistic interpretation we tend to overlook the more lawlike syntactical and other processes going on in our language comprehension.
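The fuller loop around decompression might be sketched as follows, using Python's zlib as a stand-in for "a zip algorithm." The magic-byte check and the UTF-8 intelligibility test are illustrative choices, not the only way to do either step:

```python
import zlib

def interpret(blob: bytes) -> str:
    # 1. Identify the "file type" from context
    #    (zlib streams conventionally begin with the byte 0x78).
    if not blob.startswith(b"\x78"):
        raise ValueError("unrecognized format: no rule to apply")
    # 2. Apply the determinate rule (decompression proper).
    data = zlib.decompress(blob)
    # 3. Check the output as an intelligible text
    #    (decode raises if the bytes aren't readable as UTF-8).
    return data.decode("utf-8")

blob = zlib.compress("a perfectly ordinary sentence".encode("utf-8"))
assert interpret(blob) == "a perfectly ordinary sentence"
```

Only step 2 is the "algorithm" as usually discussed; steps 1 and 3 are the neglected identification and intelligibility checks.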
Hope that clarifies things.
The issue of a "mere sign having no grammar" is an interesting one. I hope you'll forgive me if I dedicate a separate post to that. I'd simply say that while we might be forced to extend the sense of "grammar" somewhat beyond its use in Junior High English class, the extension is a fruitful one and actually quite in keeping with the historic use of the term. (Here I'm thinking of Scotus' grammatica speculativa.)