
Is radically enactive imagination really contentless?

Phenomenology and the Cognitive Sciences, 2021.

A very early (and lower-quality) version of this paper was rejected by Phenomenology & Mind.

In retrospect, I was extremely lucky not to publish it there!

My first paper ever. 

Readcube: https://rdcu.be/cXIWt

Preprint: here

Radical enactivists claim that cognition is split into two distinct kinds, which can be differentiated by how they relate to mental content. In their view, basic cognitive activities involve no mental content whatsoever, whereas linguistically scaffolded, non-basic cognitive activities constitutively involve the manipulation of mental contents. Here, I evaluate how this dichotomy applies to imagination, arguing that the sensory images involved in basic acts of imagination qualify as vehicles of content, contrary to what radical enactivists claim. To do so, I leverage what has aptly been dubbed a “compare to prototype” argument. I first identify, within the enactivist literature, the general functional profile of a vehicle of content complying with the austere standard of contentfulness radical enactivists adhere to. Given such a profile, I then show, relying on a mixture of reasoning and empirical evidence, that basic sensory images satisfy it, and thus that they can rightfully be identified as vehicles of content. This, I claim, is sufficient to deny that basic imagination involves no mental content.

Structural representations do not meet the job description challenge

Synthese, 2021

Readcube: https://rdcu.be/cXIQE

Preprint: here, or here

Important: The argument in this paper is flawed, as I conflate two (if not three) senses of "structural representations". See my "Maps, Simulations, Spaces and Dynamics" in Erkenntnis to learn more.

Structural representations are increasingly popular in philosophy of cognitive science. A key virtue they seemingly boast is that of meeting Ramsey's job description challenge. For this reason, structural representations appear tailored to play a clear representational role within cognitive architectures. Here, however, I claim that structural representations do not meet the job description challenge. This is because even our most demanding account of their functional profile is satisfied by at least some receptors, which paradigmatically fail the job description challenge. Hence, the functional profile typically associated with structural representations does not identify representational posits. After a brief introduction, I present, in the second section of the paper, the job description challenge. I clarify why receptors fail to meet it and highlight why, as a result, they should not be considered representations. In the third section, I introduce what I take to be the most demanding account of structural representations at our disposal, namely Gładziejewski's account. With the necessary background in place, I turn from exposition to criticism. In the first half of the fourth section, I show that the functional profiles of structural representations and receptors coincide: some receptors boast, as a matter of fact, all the functional features associated with structural representations. Since receptors function merely as causal mediators, I conclude that structural representations are mere causal mediators too. In the second half of the fourth section, I make this conclusion intuitive with a toy example. I then conclude the paper, anticipating some objections my argument invites.

Are Generative Models Structural Representations?

Minds and Machines, 2021
Readcube: https://rdcu.be/cXIWf
Preprint: here, or here

Philosophers interested in the theoretical consequences of predictive processing often assume that predictive processing is an inferentialist and representationalist theory of cognition. More specifically, they assume that predictive processing revolves around approximated Bayesian inferences drawn by inverting a generative model. Generative models, in turn, are said to be structural representations: representational vehicles that represent their targets by being structurally similar to them. Here, I challenge this assumption, claiming that, at present, it lacks an adequate justification. I examine the only argument offered to establish that generative models are structural representations, and argue that it does not substantiate the desired conclusion. Having done so, I consider a number of alternative arguments aimed at showing that the relevant structural similarity obtains, and argue that all these arguments are unconvincing for a variety of reasons. I then conclude the paper by briefly highlighting three themes that might be relevant for further investigation on the matter.

Retiring the “Cinderella view”: the spinal cord as an intrabodily cognitive extension

Biology & Philosophy, 2021

Rejected by The British Journal for the Philosophy of Science

With: Elia Zanin & Marco Viola

Readcube: https://rdcu.be/cXISw

Preprint: here, or here

Within the field of neuroscience, it is assumed that the central nervous system is divided into two functionally distinct components: the brain, which does the cognizing, and the spinal cord, which is a conduit of information enabling the brain to do its job. We dub this the “Cinderella view” of the spinal cord. Here, we suggest it should be abandoned. Marshalling recent empirical findings, we claim that the spinal cord is best conceived as an intrabodily cognitive extension: a piece of biological circuitry that, together with the brain, constitutes our cognitive engine. To do so, after a brief introduction to the anatomy of the spinal cord, we present a number of empirical studies highlighting the role played by the spinal cord in cognitive processing. Having done so, we claim that the spinal cord satisfies two popular and often endorsed criteria used to adjudicate cases of cognitive extension; namely, the parity principle and the so-called “trust and glue” criteria. This, we argue, is sufficient to vindicate the role of the spinal cord as an intrabodily mental extension. We then steel our case by considering a sizable number of prominent anti-extension arguments, showing that none of them poses a serious threat to our main claim. We then conclude the essay, spelling out a number of far-from-trivial implications of our view.

Predictive processing and anti-representationalism

Synthese, 2021

Rejected by Philosophy and Phenomenological Research

Readcube: https://rdcu.be/cXIUh

Preprint: here, or here

Many philosophers claim that the neurocomputational framework of predictive processing entails a globally inferentialist and representationalist view of cognition. Here, I contend that this is not correct. I argue that, given the theoretical commitments these philosophers endorse, no structure within predictive processing systems can be rightfully identified as a representational vehicle. To do so, I first examine some of the theoretical commitments these philosophers share, and show that these commitments provide a set of necessary conditions the satisfaction of which allows us to identify representational vehicles. Having done so, I introduce a predictive processing system capable of active inference, in the form of a simple robotic “brain”. I examine it thoroughly, and show that, given the necessary conditions highlighted above, none of its components qualifies as a representational vehicle. I then consider and allay some worries my claim could raise. I consider whether the anti-representationalist verdict thus obtained could be generalized, and provide some reasons favoring a positive answer. I further consider whether my arguments here could be blocked by allowing the same representational vehicle to possess multiple contents, and whether my arguments entail some extreme form of revisionism, answering in the negative in both cases. A quick conclusion follows.

Extended Predictive Minds: do Markov Blankets Matter?

Review of Philosophy and Psychology, 2021

Readcube: https://rdcu.be/cXIV4

Preprint: here, or here

The extended mind thesis claims that a subject’s mind sometimes encompasses the environmental props the subject interacts with while solving cognitive tasks. Recently, the debate over the extended mind has focused on Markov Blankets: the statistical boundaries separating biological systems from the environment. Here, I argue that such a focus is mistaken, because Markov Blankets neither adjudicate, nor help us adjudicate, whether the extended mind thesis is true. To do so, I briefly introduce Markov Blankets and the free energy principle in Section 2. I then turn from exposition to criticism. In Section 3, I argue that using Markov Blankets to determine whether the mind extends will provide us with an answer based on circular reasoning. In Section 4, I consider whether Markov Blankets help us track the boundaries of the mind, answering in the negative. This is because resorting to Markov Blankets to track the boundaries of the mind yields extensionally inadequate conclusions which violate the parity principle. In Section 5, I further argue that Markov Blankets lead us to sidestep the debate over the extended mind, as they make internalism about the mind vacuously true. A brief concluding paragraph follows.

Phenomenal transparency, cognitive extension, and predictive processing

Phenomenology and the Cognitive Sciences, 2022

Readcube: https://rdcu.be/cXIVh

Preprint: here

I discuss Clark’s predictive processing/extended mind hybrid, diagnosing a problem: Clark’s hybrid suggests that we pay attention to mind-extending external resources when we use them. This clashes with a commonly accepted necessary condition of cognitive extension; namely, that mind-extending resources must be phenomenally transparent when used. I then propose a solution to this problem, claiming that the phenomenal transparency condition should be rejected. To do so, I put forth a parity argument to the effect that phenomenal transparency cannot be a necessary condition on cognitive extension: roughly, since internal cognitive resources can fail to be transparent when used, by parity, external resources can fail to be phenomenally transparent too. Further, I argue that phenomenal transparency is not even a reliable indicator of cognitive extension; hence its absence should not be considered a problem for Clark’s extended mind-predictive processing hybrid. Lastly, I consider and allay a number of worries my proposal might raise, and conclude the paper.

Troubles with mathematical contents

Philosophical Psychology, 2022

Rejected by Erkenntnis, Mind & Language and Philosophical Studies

No readcube :(

Preprint: here, or here

To account for the explanatory role representations play in cognitive science, Egan’s deflationary account introduces a distinction between cognitive and mathematical contents. According to that account, only the latter are genuine explanatory posits of cognitive-scientific theories, as they represent the arguments and values cognitive devices need to represent in order to compute. Here, I argue that the deflationary account suffers from two important problems, whose roots trace back to the introduction of mathematical contents. First, I will argue that mathematical contents do not satisfy important and widely accepted desiderata all theories of content are called to satisfy, such as content determinacy and naturalism. Second, I will claim that there are cases in which mathematical contents cannot play the explanatory role the deflationary account claims they play, and I will propose an empirical counterexample. Lastly, I will conclude the paper by highlighting two important implications of my arguments, concerning recent theoretical proposals to naturalize representations via physical computation and the popular predictive processing theory of cognition.

Why can’t we say what cognition is (at least for the time being)

Philosophy and the Mind Sciences, 2023

Rejected by The British Journal for the Philosophy of Science

Free Open Access article here

Some philosophers search for the mark of the cognitive: a set of individually necessary and jointly sufficient conditions identifying all instances of cognition. They claim that the mark of the cognitive is needed to steer the development of cognitive science on the right path. Here, I argue that, at least at present, it cannot be provided. First (§2), I identify some of the factors motivating the search for a mark of the cognitive, each yielding a desideratum the mark is supposed to satisfy (§2.1). I then (§2.2) highlight a number of tensions in the literature on the mark of the cognitive, suggesting they’re best resolved by distinguishing two distinct programs. The first program (§3) is that of identifying a mark of the cognitive capturing our everyday notion of cognition. I argue that such a program is bound to fail for a number of reasons: it is not clear whether such an everyday notion exists; even if it existed, it would not be able to spell out individually necessary and jointly sufficient conditions for cognition; and even if it were able to spell them out, these conditions would not satisfy the desiderata a mark of the cognitive should satisfy. The second program is that of identifying a mark of the cognitive spelling out a genuine scientific kind. But the current state of fragmentation of cognitive science, and the fact that it is splintered into a myriad of different research traditions, prevent us from identifying such a kind. And we have no reason to think that these various research traditions will converge, allowing us to identify a single mark. Or so, at least, I will argue in (§4). I then conclude the paper (§5) by deflecting an intuitive objection and exploring some of the consequences of the thesis I have defended.

Public Charades, or how the enactivist can tell apart pretense from nonpretense

Erkenntnis

Rejected by Inquiry and Review of Philosophy and Psychology

With Zuzanna Rucinska

Preprint

Readcube

Enactive approaches to cognition argue that cognition comes about through the dynamical interaction of agent and environment, casting cognition as an activity an agent performs by interacting in specific ways with her environment. This view of cognition is now under significant pressure: in a series of recent publications, Peter Langland-Hassan has proposed a number of arguments which purportedly should lead us to conclude that enactive approaches are unable to account for pretense without paying too severe a theoretical price. In this paper, we will defend enactive approaches to pretense, arguing that they can in fact explain pretense without incurring the negative theoretical consequences Langland-Hassan fears. To this effect, we start by exposing Langland-Hassan’s challenge (§2), and then highlight its core assumptions and demonstrate their falsity (§3). Having done so, we argue that none of the theoretical consequences Langland-Hassan fears follows (§4), and that enactive approaches to cognition may in fact be explanatorily superior to the one Langland-Hassan favors (§5). A brief conclusion follows (§6).

Extended Animal Cognition

Forthcoming in Synthese (topical collection on extended mind)
Rejected by Biology & Philosophy and Inquiry

With Giulia Leonetti 
Preprint: here

According to the extended cognition thesis, an agent’s cognitive system can sometimes include extracerebral components amongst its physical constituents. Here, we show that such a view of cognition has an unjustifiably anthropocentric focus, for it depicts cognitive extensions as a human-only affair. In contrast, we will argue that if human cognition extends, then the cognition of many non-human animals extends too, for many non-human animals rely on the same cognition-extending strategies humans rely on. To substantiate this claim, we will proceed as follows. First (§1), we will introduce the extended cognition thesis, exposing its anthropocentric bias. Then, we will show that humans and many non-human animals rely on the same cognition-extending strategies. To do so, we will discuss a variety of case studies, including “intrabodily” cognitive extensions such as the spinal cord (§2), the widespread reliance on epistemic actions to solve cognitive tasks (§3), and cases of animal cognitive offloading (§4). We’ll then allay some worries our claim might raise (§5) and conclude the paper (§6).

Neural Representations Unobserved

Published in Synthese

Readcube

Preprint: here

Important: The paper mentioned in the appendix (fn 40) is now out in Erkenntnis. Click here to jump to it.

Neural structural representations are cerebral map- or model-like structures that structurally resemble what they represent. These representations are absolutely central to the “cognitive neuroscience revolution”, as they are the only type of representation compatible with the revolutionaries’ mechanistic commitments. Crucially, however, these very same commitments entail that structural representations can be observed in the swirl of neuronal activity. Here, I argue that no structural representations have been observed in our neuronal activity, no matter the spatiotemporal scale of observation. My argument begins by introducing the “cognitive neuroscience revolution” (§1) and sketching a prominent, widely adopted account of structural representations (§2). Then, I will consult various reports describing our neuronal activity at different spatiotemporal scales, arguing that none of them reports the presence of structural representations (§3). After deflecting some intuitive objections to my analysis (§4), I will conclude that, in the absence of neural structural representations, representationalism and mechanism can’t go together, and so the “cognitive neuroscience revolution” is forced to abandon one of its commitments (§5).

Structure and Function in the predictive brain

Rejected by Philosophy of Science

Currently rewriting it

Co-authored with Marco Viola 

Preprint (rejected version)

Hierarchical predictive coding, also known as “predictive processing”, claims that all neural structures perform a single computational function; namely, that of minimizing an intracerebral neural signal known as prediction error. How does this doctrine measure up against the desiderata that structure-function mappings are supposed to meet in cognitive neuroscience? To answer this question, we first clarify the roles structure-function mappings are supposed to play (§2) and then introduce hierarchical predictive coding, showing that it entails a very peculiar all-to-one structure-function mapping (§3). We then argue that such a mapping is extremely problematic (§4), as it seems unsuited to play the epistemic roles structure-function mappings are supposed to play in cognitive neuroscience. Further, it depicts the brain as an equipotential organ, which it most definitely isn’t. Should hierarchical predictive coding be abandoned, then? Not necessarily. As we will show, there is a way to rely on computational indeterminacy so as to say that different neural areas perform different cognitive functions without contradicting the claim that all neural structures have the function of minimizing prediction error (§5). Having shown defenders of hierarchical predictive coding a way out of their predicament, we will conclude our analysis by gauging some of its implications (§6).

Affective artificial agents as sui generis affective artifacts

Topoi

Co-authored with Giacomo Zanotti

Readcube

AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life is no exception. In this article, we analyze one way in which AI-based technologies can affect it. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive features. We argue that, unlike comparatively low-tech affective artifacts, affective artificial agents display a specific form of agency, which prevents them from being perceived by their users as extensions of their selves. In addition to this, we claim that their functioning crucially depends on the simulation of human-like emotion-driven behavior and requires a distinctive form of transparency (we call it emotional transparency) that might give rise to ethical and normative tensions.

Radically embodied introspection

Submitted to Philosophical Studies

Co-authored with Zuzanna Rucinska & Thomas Fondelli

Preprint

Introspection is often intuitively conceptualized as a “purely inner” activity, whereby the introspector temporarily breaks their coupling with the external world to focus on their “inner environment”. In this paper, we put forth a substantially different picture of introspection. Inspired by radically embodied cognitive science, we argue that introspective processes delivering substantial self-knowledge consist of embodied, world-involving activities wherein the introspector remains coupled with the world in specific, controlled ways. Our argument unfolds as follows: after a brief introduction, we provide a minimal (and hopefully uncontroversial) account of introspection (§2), followed by a brief introduction to radically embodied views of the mind (§3). Then, in (§4), we will present in detail a case of radically embodied introspection (§4.1), argue that that case is indeed a case of introspection (§4.2), and defend the claim from some foreseeable objections (§4.3). In (§5) we will provide further cases of radically embodied introspection, showing that it is in fact a widespread and varied phenomenon. Lastly, (§6) concludes the paper by sketching some morals to be drawn from our examples.

Dissolving representation-hunger

Rejected by The Australasian Journal of Philosophy, Ergo, and The British Journal for the Philosophy of Science

In review at Philosophical Studies

Preprint: here

Radically embodied approaches to cognition have been calling for a form of global eliminativism about cognitive representations: on these views, representations are in no case involved in the execution or explanation of cognitive processes. This globally eliminative view, however, is supposedly kept at bay by the “representation-hungry challenge”. Roughly, the challenge grants that a local form of eliminativism relative to sensorimotor coordinations can be established, but denies that we have any compelling justification to endorse a global form of eliminativism about cognitive representations. In this paper, I won’t respond to the challenge. Rather, I will dissolve it, showing two things. First, that the representation-hungry challenge lacks rational motivation: the line of reasoning supporting the challenge leads to some false conclusions, and as such it should be rejected; but rejecting it means rejecting the rational motivation supporting the challenge. Second, that the challenge cannot be coherently asserted by the cognitivist/representationalist. This is because, when leveling the representation-hungry challenge, the cognitivist/representationalist allows for a form of local eliminativism limited to sensorimotor coordinations. However, given the theoretical commitments of the cognitivist/representationalist, sensorimotor coordinations are representation-hungry cognitive phenomena. Thus, the representationalist cannot coherently assert that the latter form a barrier beyond which representations are safe from elimination.

Maps, Simulations, Spaces and Dynamics: On Distinguishing Types of Structural Representations

Published in Erkenntnis

Readcube

Important: This paper identifies and corrects an error in my older “Structural representations do not meet the job description challenge”.

Structural representations are likely the most talked-about representational posits in the contemporary debate over cognitive representations. Indeed, the debate surrounding them is so vast that virtually every claim about them has been made. Some, for instance, claimed structural representations are different from indicators. Others argued they are the same. Some claimed structural representations mesh perfectly with mechanistic explanations; others argued they can’t in principle mesh. Some claimed structural representations are central to predictive processing accounts of cognition; others retorted that predictive processing networks are blissfully free of structural representations. And so forth. Here, I suggest this confusing state of affairs is due to the fact that the term “structural representations” is applied to a number of distinct conceptions of representations. In this paper, I distinguish four such conceptions, argue that these four conceptions are actually distinct, and then show that such a fourfold distinction can be used to clarify some of the most pressing questions concerning structural representations and their role in cognitive theorizing, making these questions more easily answerable.
