Marco Facchin-Clerici, Ph.D.
Is Radically Enactive Imagination Really Contentless?
Single Authored
Phenomenology & the Cognitive Sciences
Originally rejected by Phenomenology & Mind
Quick Summary:
The paper argues that basic imaginings satisfy a minimal, enactivist-friendly notion of representation (which I take from Rowlands' 2006 Body Language). Since representations have contents, it follows that basic imagination is not contentless - which poses a problem for radical enactivism.
Abstract:
Radical enactivists claim that cognition is split into two distinct kinds, which can be differentiated by how they relate to mental content. In their view, basic cognitive activities involve no mental content whatsoever, whereas linguistically scaffolded, non-basic, cognitive activities constitutively involve the manipulation of mental contents. Here, I evaluate how this dichotomy applies to imagination, arguing that the sensory images involved in basic acts of imagination qualify as vehicles of content, contrary to what radical enactivists claim. To argue so, I leverage what has appropriately been dubbed a “compare to prototype” argument. Hence, I will first identify, within the enactivist literature, the general functional profile of a vehicle of content complying with the austere standard of contentfulness radical enactivists adhere to. Provided such a profile, I will show, relying on a mixture of reasoning and empirical evidence, that basic sensory images satisfy it, and thus that they can rightfully be identified as vehicles of content. This, I claim, provides a sufficient reason to identify the sensory images involved in basic acts of imagination as vehicles of content, thereby denying that basic imagination does not involve mental content.
​
​
Structural Representations do not meet the Job Description Challenge
Single Authored
Synthese
Quick Summary:
I argue that at least some receptors satisfy a popular and demanding functional notion of structural representation. Since receptors do not "pass" the job description challenge (i.e., their functional profile is not a representational functional profile), it follows that structural representations don't pass it either.
​
The paper has a "sequel": Maps, Simulations, Spaces and Dynamics (in Erkenntnis), which identifies an important flaw in my original argument.
Abstract:
Structural representations are increasingly popular in philosophy of cognitive science. A key virtue they seemingly boast is that of meeting Ramsey's job description challenge. For this reason, structural representations appear tailored to play a clear representational role within cognitive architectures. Here, however, I claim that structural representations do not meet the job description challenge. This is because even our most demanding account of their functional profile is satisfied by at least some receptors, which paradigmatically fail the job description challenge. Hence, the functional profile typically associated with structural representations does not identify representational posits. After a brief introduction, I present, in the second section of the paper, the job description challenge. I clarify why receptors fail to meet it and highlight why, as a result, they should not be considered representations. In the third section I introduce what I take to be the most demanding account of structural representations at our disposal, namely Gładziejewski's account. Provided the necessary background, I turn from exposition to criticism. In the first half of the fourth section, I equate the functional profile of structural representations and receptors. To do so, I show that some receptors boast, as a matter of fact, all the functional features associated with structural representations. Since receptors function merely as causal mediators, I conclude structural representations are mere causal mediators too. In the second half of the fourth section I make this conclusion intuitive with a toy example. I then conclude the paper, anticipating some objections my argument invites.
​
​
Are Generative Models Structural Representations?
Single Authored
Minds & Machines
Quick Summary:
I argue that a popular argument to the effect that generative models are structural representations fails. This is because the argument does not show that the vehicle instantiating the model is similar in the required way to the represented target.
Abstract:
Philosophers interested in the theoretical consequences of predictive processing often assume that predictive processing is an inferentialist and representationalist theory of cognition. More specifically, they assume that predictive processing revolves around approximated Bayesian inferences drawn by inverting a generative model. Generative models, in turn, are said to be structural representations: representational vehicles that represent their targets by being structurally similar to them. Here, I challenge this assumption, claiming that, at present, it lacks an adequate justification. I examine the only argument offered to establish that generative models are structural representations, and argue that it does not substantiate the desired conclusion. Having so done, I consider a number of alternative arguments aimed at showing that the relevant structural similarity obtains, and argue that all these arguments are unconvincing for a variety of reasons. I then conclude the paper by briefly highlighting three themes that might be relevant for further investigation on the matter.
​
​
The Spinal Cord as an Intrabodily Cognitive Extension
Written with: Elia Zanin & Marco Viola
Biology & Philosophy
Rejected by: The British Journal for the Philosophy of Science
Quick Summary:
We survey the empirical literature on the spinal cord and its role in cognitive processing, and then make a "parity argument" to the effect that it is part of the cognitive system. Roughly: if parts of our brains were doing what it does, we would count them as cognitive; hence, the spinal cord is cognitive.
Abstract:
Within the field of neuroscience, it is assumed that the central nervous system is divided into two functionally distinct components: the brain, which does the cognizing, and the spinal cord, which is a conduit of information enabling the brain to do its job. We dub this the “Cinderella view” of the spinal cord. Here, we suggest it should be abandoned. Marshalling recent empirical findings, we claim that the spinal cord is best conceived as an intrabodily cognitive extension: a piece of biological circuitry that, together with the brain, constitutes our cognitive engine. To do so, after a brief introduction to the anatomy of the spinal cord, we briefly present a number of empirical studies highlighting the role played by the spinal cord in cognitive processing. Having so done, we claim that the spinal cord satisfies two popular and often endorsed criteria used to adjudicate cases of cognitive extension; namely the parity principle and the so-called “trust and glue” criteria. This, we argue, is sufficient to vindicate the role of the spinal cord as an intrabodily mental extension. We then strengthen our case by considering a sizable number of prominent anti-extension arguments, showing that none of them poses a serious threat to our main claim. We then conclude the essay, spelling out a number of far-from-trivial implications of our view.
​
​
Predictive Processing and Anti-Representationalism
Single Authored
Synthese
Quick Summary:
If you look within predictive processing systems, there seems to be nothing resembling a (structural) representation, no matter how hard you look.
Abstract:
Many philosophers claim that the neurocomputational framework of predictive processing entails a globally inferentialist and representationalist view of cognition. Here, I contend that this is not correct. I argue that, given the theoretical commitments these philosophers endorse, no structure within predictive processing systems can be rightfully identified as a representational vehicle. To do so, I first examine some of the theoretical commitments these philosophers share, and show that these commitments provide a set of necessary conditions the satisfaction of which allows us to identify representational vehicles. Having done so, I introduce a predictive processing system capable of active inference, in the form of a simple robotic “brain”. I examine it thoroughly, and show that, given the necessary conditions highlighted above, none of its components qualifies as a representational vehicle. I then consider and allay some worries my claim could raise. I consider whether the anti-representationalist verdict thus obtained could be generalized, and provide some reasons favoring a positive answer. I further consider whether my arguments here could be blocked by allowing the same representational vehicle to possess multiple contents, and whether my arguments entail some extreme form of revisionism, answering in the negative in both cases. A quick conclusion follows.
​
​
Do Markov Blankets Really Matter?
Single Authored
Review of Philosophy & Psychology
Quick Summary:
We cannot determine whether the "extended mind thesis" is true or not by appealing to Markov Blankets - for a variety of reasons. One of them is that such a procedure makes the extended mind thesis definitionally false, offering an answer to our question in name only.
Abstract:
The extended mind thesis claims that a subject’s mind sometimes encompasses the environmental props the subject interacts with while solving cognitive tasks. Recently, the debate over the extended mind has been focused on Markov Blankets: the statistical boundaries separating biological systems from the environment. Here, I argue such a focus is mistaken, because Markov Blankets neither adjudicate, nor help us adjudicate, whether the extended mind thesis is true. To do so, I briefly introduce Markov Blankets and the free energy principle in Section 2. I then turn from exposition to criticism. In Section 3, I argue that using Markov Blankets to determine whether the mind extends will provide us with an answer based on circular reasoning. In Section 4, I consider whether Markov Blankets help us track the boundaries of the mind, answering in the negative. This is because resorting to Markov Blankets to track the boundaries of the mind yields extensionally inadequate conclusions which violate the parity principle. In Section 5, I further argue that Markov Blankets lead us to sidestep the debate over the extended mind, as they make internalism about the mind vacuously true. A brief concluding paragraph follows.
​
​
Phenomenal Transparency, Cognitive Extensions, and Predictive Processing
Single Authored
Phenomenology & The Cognitive Sciences
Quick Summary:
Clark's work on predictive processing puzzlingly depicts mind-extending resources as attended to - this goes against the idea that genuinely mind-extending resources must "disappear" from our conscious apprehension when used. I argue that this contrast should be resolved by letting go of the second idea: genuinely mind-extending resources can be consciously apprehended when used.
Abstract:
I discuss Clark’s predictive processing/extended mind hybrid, diagnosing a problem: Clark’s hybrid suggests that, when we use them, we pay attention to mind-extending external resources. This clashes with a commonly accepted necessary condition of cognitive extension; namely, that mind-extending resources must be phenomenally transparent when used. I then propose a solution to this problem by claiming that the phenomenal transparency condition should be rejected. To do so, I put forth a parity argument to the effect that phenomenal transparency cannot be a necessary condition on cognitive extension: roughly, since internal cognitive resources can fail to be transparent when used, by parity, external resources can fail to be phenomenally transparent too. Further, I argue that phenomenal transparency is not even a reliable indicator of cognitive extension; hence its absence should not be considered a problem for Clark’s extended mind-predictive processing hybrid. Lastly, I consider and allay a number of worries my proposal might raise, and conclude the paper.
​
​
Troubles with Mathematical Contents
Single Authored
Philosophical Psychology
Rejected by: Erkenntnis, Mind & Language, and Philosophical Studies
Preprint
Quick Summary:
I examine Egan's deflationary account of representations, and argue that it fails to account for representations in the same way more metaphysically robust accounts of representations do: they all fail to account for contents in a naturalistically respectable manner. The only difference is that Egan's account fails to do so with regard to "mathematical", rather than "cognitive", contents.
Abstract:
To account for the explanatory role representations play in cognitive science, Egan’s deflationary account introduces a distinction between cognitive and mathematical contents. According to that account, only the latter are genuine explanatory posits of cognitive-scientific theories, as they represent the arguments and values cognitive devices need to represent to compute. Here, I argue that the deflationary account suffers from two important problems, whose roots trace back to the introduction of mathematical contents. First, I will argue that mathematical contents do not satisfy important and widely accepted desiderata all theories of content are called to satisfy, such as content determinacy and naturalism. Secondly, I will claim that there are cases in which mathematical contents cannot play the explanatory role the deflationary account claims they play, proposing an empirical counterexample. Lastly, I will conclude the paper by highlighting two important implications of my arguments, concerning recent theoretical proposals to naturalize representations via physical computation, and the popular predictive processing theory of cognition.
​
​
Why can't we say what cognition is
(at least for the time being)
Single Authored
Philosophy & the Mind Sciences
Rejected by: The British Journal for the Philosophy of Science
Open Access Article
Quick Summary:
I contend that we are unable to define cognition in any way. This is because there may be no unique folk, intuitive concept of cognition, and we cannot identify any unique scientific concept of cognition, due to the existence of many different, competing, and equally legitimate approaches to the sciences of the mind.
Abstract:
Some philosophers search for the mark of the cognitive: a set of individually necessary and jointly sufficient conditions identifying all instances of cognition. They claim that the mark of the cognitive is needed to steer the development of cognitive science on the right path. Here, I argue that, at least at present, it cannot be provided. First (§2), I identify some of the factors motivating the search for a mark of the cognitive, each yielding a desideratum the mark is supposed to satisfy (§2.1). I then (§2.2) highlight a number of tensions in the literature on the mark of the cognitive, suggesting they’re best resolved by distinguishing two distinct programs. The first program (§3) is that of identifying a mark of the cognitive capturing our everyday notion of cognition. I argue that such a program is bound to fail for a number of reasons: it is not clear whether such an everyday notion exists; and even if it existed, it would not be able to spell out individually necessary and jointly sufficient conditions for cognition; and even if it were able to spell them out, these conditions would not satisfy the desiderata a mark of the cognitive should satisfy. The second program is that of identifying a mark of the cognitive spelling out a genuine scientific kind. But the current state of fragmentation of cognitive science, and the fact that it is splintered into a myriad of different research traditions, prevent us from identifying such a kind. And we have no reason to think that these various research traditions will converge, allowing us to identify a single mark. Or so, at least, I will argue in (§4). I then conclude the paper (§5) deflecting an intuitive objection, and exploring some of the consequences of the thesis I have defended.
​
​
Public Charades, or
how enactivists can tell apart pretense from non-pretense
With Zuzanna Rucinska
Erkenntnis
Rejected by: Review of Philosophy and Psychology
Quick Summary:
Enactivists don't need to posit (representational) mental states to individuate pretense: they can do so solely by reference to the pretender's behavior, provided it is described "thickly" enough. Indeed, such an account of pretense is likely to be more satisfactory than one that mentions folk psychological mental states.
Abstract:
Enactive approaches to cognition argue that cognition, including pretense, comes about through the dynamical interaction of agent and environment. Applied to cognition, these approaches cast cognition as an activity an agent performs interacting in specific ways with her environment. This view is now under significant pressure: in a series of recent publications, Peter Langland-Hassan has proposed a number of arguments which purportedly should lead us to conclude that enactive approaches are unable to account for pretense without paying too severe a theoretical price. In this paper, we will defend enactive approaches to pretense, arguing that they can in fact explain pretense without incurring the negative theoretical consequences Peter Langland-Hassan fears. To this effect, we start by exposing Langland-Hassan’s challenge (§2), to then highlight its core assumptions and demonstrate their falsity (§3). Having done so, we argue that none of the theoretical consequences Langland-Hassan fears follow (§4), and in fact enactive approaches to cognition may be explanatorily superior to the one Langland-Hassan favors (§5). A brief conclusion will then follow (§6).
​
​
Extended Animal Cognition
With Giulia Leonetti
Synthese
Rejected by: Inquiry, Biology & Philosophy
Quick Summary:
Most treatments of the extended mind focus on human cognition. This may suggest that cognitive extensions are exclusively human. But, we argue, this isn't the case. Many human cognitive extensions have clear animal analogs. If human cognition extends, then, so too does animal cognition.
Abstract:
According to the extended cognition thesis, an agent’s cognitive system can sometimes include extracerebral components amongst its physical constituents. Here, we show that such a view of cognition has an unjustifiably anthropocentric focus, for it depicts cognitive extensions as a human-only affair. In contrast, we will argue that if human cognition extends, then the cognition of many non-human animals extends too, for many non-human animals rely on the same cognition-extending strategies humans rely on. To substantiate this claim, we will proceed as follows. First (§1), we will introduce the extended cognition thesis, exposing its anthropocentric bias. Then, we will show that humans and many non-human animals rely on the same cognition-extending strategies. To do so, we will discuss a variety of case studies, including “intrabodily” cognitive extensions such as the spinal cord (§2), the widespread reliance on epistemic actions to solve cognitive tasks (§3) and cases of animal cognitive offloading (§4). We’ll then allay some worries our claim might raise (§5) to then conclude the paper (§6).
​
​
Neural Representations Unobserved
Or, a dilemma for the cognitive neuroscience revolution
Single Authored
Synthese
Quick Summary:
According to the cognitive neuroscience revolution, neural structural representations are components of neural mechanisms that we can observe (and, in fact, have observed). I examine a variety of neural structures and argue that none of them satisfies the relevant definition of structural representation. Hence, they have not been observed - with dire consequences for the cognitive neuroscience revolution.
The paper has a "sequel" in Erkenntnis
Abstract:
Neural structural representations are cerebral map- or model-like structures that structurally resemble what they represent. These representations are absolutely central to the “cognitive neuroscience revolution”, as they are the only type of representation compatible with the revolutionaries’ mechanistic commitments. Crucially, however, these very same commitments entail that structural representations can be observed in the swirl of neuronal activity. Here, I argue that no structural representations have been observed in our neuronal activity, no matter the spatiotemporal scale of observation. My argument begins by introducing the “cognitive neuroscience revolution” (§1) and sketching a prominent, widely adopted account of structural representations (§2). Then, I will consult various reports that describe our neuronal activity at various spatiotemporal scales, arguing that none of them reports the presence of structural representations (§3). After having deflected certain intuitive objections to my analysis (§4), I will conclude that, in the absence of neural structural representations, representationalism and mechanism can’t go together, and so the “cognitive neuroscience revolution” is forced to abandon one of its main commitments (§5).
​
​
Structure and Function in the Predictive Brain
With Marco Viola
Biology & Philosophy
Rejected by: Philosophy of Science, The British Journal for the Philosophy of Science, and Minds & Machines
Quick Summary:
Predictive Processing (and the Free Energy Principle) have expanded their scope, from a "theory of cortical responses" to an ambitious framework explaining what it is to be a thing that persists in time. What do such massively expansive theories entail for our brains? Nothing good, we argue, as they end up casting brains as equipotential organs. Predictive Processing should thus limit its explanatory ambitions, so as to stay in closer contact with neuroscience.
Abstract:
Predictive processing is an ambitious neurocomputational framework, offering a unified explanation of all cognitive processes in terms of a single computational operation, namely prediction error minimization. Whilst this ambitious unificatory claim has been thoroughly analyzed, less attention has been paid to what predictive processing entails for structure-function mappings in cognitive neuroscience. We argue that, taken at face value, predictive processing entails an all-to-one structure-function mapping, wherein each individual neural structure is assigned the same function, namely minimizing prediction error. Such a structure-function mapping, we show, is highly problematic. For, barring a few rare occasions, such a structure-function mapping fails to play the predictive, explanatory and heuristic roles structure-function mappings are expected to play in cognitive neuroscience. Worse still, it offers a picture of the brain that we know is wrong. For, it depicts the brain as an equipotential organ; an organ wherein structural differences do not correspond to any appreciable functional difference, and wherein each component can substitute for any other component without causing any loss or degradation of functionality. Somewhat ironically, the very neuroscientific roots of predictive processing motivate a form of skepticism concerning the framework’s most ambitious unificatory claims. Do these problems force us to abandon predictive processing? Not necessarily. For, once the assumption that all cognition can be accounted for exclusively in terms of prediction error minimization is relaxed, the problems we diagnosed lose their bite.
​
​
Maps, Simulations, Spaces and Dynamics
On distinguishing types of structural representations
Single Authored
Erkenntnis
Quick Summary:
Structural representations have become central to the debate over cognitive (neuro)science. However, it is not clear whether we all conceive of them in the same way. The paper argues that we don't: it identifies four distinct senses of "structural representation" present in the literature, shows that they are distinct, and then shows how distinguishing them may be relevant to a number of debates.
Abstract:
Structural representations are likely the most talked about representational posits in the contemporary debate over cognitive representations. Indeed, the debate surrounding them is so vast that virtually every claim about them has been made. Some, for instance, claimed structural representations are different from indicators. Others argued they are the same. Some claimed structural representations mesh perfectly with mechanistic explanations, others argued they can’t in principle mesh. Some claimed structural representations are central to predictive processing accounts of cognition, others retorted that predictive processing networks are blissfully structural representation free. And so forth. Here, I suggest this confusing state of affairs is due to the fact that the term “structural representations” is applied to a number of distinct conceptions of representations. In this paper, I distinguish four such conceptions, argue that these four conceptions are actually distinct, and then show that such a fourfold distinction can be used to clarify some of the most pressing questions concerning structural representations and their role in cognitive theorizing, making these questions more easily answerable.
​​
​
Affective Artificial Agents as sui generis Affective Artifacts
With Giacomo Zanotti
Topoi
Quick Summary:
We analyze AI-driven technologies designed to influence and regulate our affective life, claiming that they qualify as affective artifacts (according to Piredda's definition). We also claim that they are sui generis because they have agential properties. Basically, they are like self-moving, talking teddy bears.
Abstract:
AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life is no exception. In this article, we analyze one way in which AI-based technologies can affect it. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive features. We argue that, unlike comparatively low-tech affective artifacts, affective artificial agents display a specific form of agency, which prevents them from being perceived by their users as extensions of their selves. In addition to this, we claim that their functioning crucially depends on the simulation of human-like emotion-driven behavior and requires a distinctive form of transparency—we call it emotional transparency—that might give rise to ethical and normative tensions.
​
​
Radically Embodied Introspection
With Zuzanna Rucinska and Thomas Fondelli
Topoi
Quick Summary:
Introspection is often conceived as a "purely inner" activity, wherein the subject temporarily takes a break from worldly goings-on to "look inside their own head", so to speak. In contrast to this picture, we argue that introspection is often something we do "out there", engaging with the world in controlled ways - by writing in our diaries, talking to our friends and to ourselves, and so on.
Abstract:
Introspection is often conceptualized as a “purely inner” activity, whereby the introspector temporarily breaks their coupling with the external world to focus on their “inner world”. We offer a substantially different picture of introspection. Inspired by radically embodied cognitive science, we argue that introspective processes delivering substantial self-knowledge consist of embodied, world-involving activities wherein the introspector remains coupled with the world in specific, controlled ways. Our argument unfolds as follows: after a brief introduction (Sect. 1), we provide a minimal account of introspection (Sect. 2) followed by a brief introduction to radically embodied views of the mind (Sect. 3). Then, in Sect. 4, we present in detail a case of radical embodied introspection (Sect. 4.1); we argue that that case is indeed a case of introspection (Sect. 4.2), and finally we defend our claim from some foreseeable objections (Sect. 4.3). In Sect. 5 we offer other examples, showing that radically embodied introspection is a widespread and varied phenomenon. Lastly, Sect. 6 concludes the paper by sketching some morals to be drawn from our examples. The appendix briefly situates our view in the broader epistemological landscape.
​
​
Fictional minds extend for real
Philosophical Psychology
Quick Summary:
Fictionalism about the mind takes statements concerning mental states to be (roughly) of the same kind as statements concerning Sherlock Holmes - they deal with fictions, things that do not really exist.
In the paper, I argue that one (popular) way to articulate such a view entails that some (extended) mental states really exist - and that this isn't as bad as it sounds for such theories.
​
Zoe Drayson has a different paper to a similar effect.
Abstract:
Toon’s (2023) Mind as Metaphor defends a fictionalist view of propositional attitude ascriptions, according to which propositional attitudes are metaphors projecting the use of certain representational media “inside” the agent. Ascribing a belief, for example, means treating a person as if they had an “inner notebook” storing the relevant information. Such a link between propositional attitude ascriptions and our material culture puts Toon’s view in the immediate vicinity of the extended mind thesis, of which Toon tries to offer a fictionalist rendition. Yet, Toon misses one of the most important connections between his brand of fictionalism and the extended mind thesis. For, as I shall show, his fictionalism entails a specific form of the extended mind thesis. Given how Toon analyzes the literal meaning of propositional attitude ascriptions, some ascriptions of extended propositional attitudes are literally true, in a way that entails the existence of the relevant, extended, propositional attitude. This, I shall further argue, is actually good news for Toon’s fictionalist proposal. For, the real existence of extended propositional attitudes is not just compatible with Toon’s fictionalism, but also able to bolster its explanatory power.
​
​
Is Mental Content an Illusion?
Quick Summary:
It's natural to think that our thoughts have objects: when we think of (or see, or imagine, etc.) X, there is an X that is thought of (or seen, or imagined, etc.). I offer an argument to the effect that this is not the case. Our thoughts do not really have objects; they only seem to have them - just like the magician's assistant only seems to have been sawn in half.
Abstract:
When we perceive, there is something we perceive. When we think, there is something we think of. When we dream, something is dreamt. These all seem platitudes: obvious and unproblematic truths. And, given that the things perceived, thought of, or dreamed are what philosophers call the mental contents of our states, few things should seem as obvious and unproblematic as the existence of mental contents. And yet, I shall here argue that, when closely examined, mental contents appear to be illusions. Just like the proverbial sawn-in-half assistant on the magician's stage, they merely appear to exist, without actually existing. To substantiate this view I will first fix the reference of “mental content” and “illusion” in an innocent, theory-neutral manner - that is, by relying on paradigmatic positive and negative examples of them (Sects. 2 and 3). I will also show that such paradigmatic examples allow us to extract a list of features that paradigmatically identify contents and illusions as such, enabling us to recognize them. With these features at hand, I will thus argue that mental contents bear all the features that paradigmatically identify illusions. Otherwise put, I will argue that mental contents are anomalous, causally insulated, systematic, persistent and bound to a single mode of access in the same way paradigmatic examples of illusions are (Sect. 4). Having argued for my main claim, I will defend it from a couple of intuitive objections; namely that it is too silly to be taken seriously, since mental content is given in a way that does not permit us to seriously doubt its existence (Sect. 5), and that my claim is incoherent (Sect. 6). I’ll show that, intuitive as they are, the objections are not compelling, and indeed they can be easily answered. Lastly, I will conclude the paper by sketching the possible developments of an illusionistic view of content (Sect. 7).
​
​