What the foregoing accounts show is that for any set of simple perceptual features, there are different perceptual objects available to access from distinct attentional perspectives. For simple tasks like differentiating dots in a lattice, shifting the contrast of overlapping circles, or switching the view of bistable images, any perceiver with normally functioning perceptual capacities should be able to perform the requisite manipulation to find the desired perspective. Anyone can be directed to ‘see the Necker cube from a top/bottom view’ and then shift their perspective to the alternative view; and anyone can select two from a set of six dots as the focus of their attention to find in them a unique perceptual object. There are, however, some perspectives (like those of the perceptual expert) that are not available to every perceiver with normally functioning capacities – perspectives within which unique perceptual objects can be perceived. Not everyone can identify the sufficient set of notes constitutive of the Bach motif, apart from the rest of the concerto, as its own individual object of perception. The perspective from which it would be possible to individuate the motif’s notes as a distinct perceptual object is one that requires training and a specific coordination of one’s perceptual capacities. Accordingly, we shall call this perceiver-specific coordination of perceptual capacities the “perceptual frame.” We define perceptual frames as follows:
Perceptual Frame: A perceptual frame is an adaptation of the perceptual system caused by perceptual learning and realized through bottom-up functional processes such that sensory information is organized in a subject-dependent way, leading to idiosyncratic perceptual object representations.
Given the foregoing definition, a perceptual frame is a kind of perspective like those discussed above with regard to top-down shifts of attention. Yet, not all perspectives require the focus of attention, a perceiver’s interests, or occurrent top-down influences in order to function. There are also default perspectives that operate in the background and are simply the adaptively conditioned ways that we perceive in a given environment. These default perspectives function as a heuristic for perceptual processing and can be trained, developed, or adapted to prefer the selection and representation of objects in a sensory environment. Moreover, there is no such thing as a ubiquitous perspective shared by all perceivers – everyone is a little (or a lot) different.
For example, perceptual experts experience unique affordances and search strategies for their domain of expertise (Kundel et al. 2007; Gegenfurtner et al. 2011; Sheridan and Reingold 2017; Ivy et al. 2021; Ivy 2023). The ability to see more completely and more efficiently than novices is a hallmark of perceptual training and expertise. One could account for this kind of effect by invoking the concept of “mental imagery”. Mental imagery is a form of perceptual completion that does not depend on sensory stimulation, but rather on familiarity with what perceptual objects should be (Nanay 2010, 2023). For instance, knowing which stars make up a constellation, one can apply a mental image of that constellation to see those stars as an object apart from the rest of the night sky (Briscoe 2011). The application of mental imagery to a sensory scene can be automatic and also subject-dependent. Two perceivers may know different constellations and so see different combinations in the night sky. Thus, expert perceivers may simply apply a more rigorous or complete mental image to better perceive the scenes that they are familiar with. Contrary to this account, however, there is empirical evidence that the subject-dependent effects on expert object perception arise prior to the application of mental images to familiar scenes.
Eye-tracking studies have shown that expert-specific search strategies are applied (though not effectively) to search tasks outside of the experts’ domain; non-experts do not exhibit the same distinctive search patterns, nor do they search as accurately or efficiently as experts in domain-oriented images (Ivy et al. 2023). Therefore, we may interpret these findings as demonstrating that perceptual experts have a default mode of perceiving applied to both their domain of expertise and outside of it. Further, the default mode of perceiving is specific to each type of expert inasmuch as different domains of expertise require different search strategies for success. The way that an expert moves their eyes has been deliberately trained and developed through time to respond to the perceptual objects within their domain. Although these defaulted strategies bleed over into other tasks, it is only for an expert’s own domain that their perceptual frame helps to organize the sensory scene accurately and efficiently.
If it were the case that perceptual experts applied mental images to efficiently search and represent sensory scenes, we would expect them to only apply those mental images where they are applicable – within their domains of expertise. You don’t see constellations where there are no stars. However, given that experts employed the same search strategies both within and outside of their domains of expertise, this seems not to be the case. Experts indiscriminately apply the way that they have been trained to look while searching any scene (but are only better off for it when searching within their domain). This does not mean that, for example, radiologists look for tumors in the cars lining the highway on their daily commute. Rather, it means that the unique saccadic patterns that they have developed to expertly find tumors in radiographs are vestigial behaviors present in the way that they gaze at the highway while looking for opportunities to pass (among other search tasks). Put into the terms of the present paper’s thesis, the perceptual frame is the search pattern that operates in a bottom-up manner indiscriminate of the target task or goal; however, when the target task or scene matches the perceptually framed search pattern, tumors pop out for the radiologist as they do not for those with untrained eyes.
Perhaps there is still something to be said for mental imagery regarding the functions of expert perception, but the foregoing evidence suggests that there are other pre-perceptual (and thus pre-imagistic) subject-dependent forces at play; i.e., perceptual frames. Expert perceptual capacities are trained and developed, and thus causally connected to a rich cognitive apparatus aimed at adapting to the targets and goals of search. Yet consider a motorbike whose engine and chassis have been built for the racetrack: the bike can still operate on the street, if inefficiently, even though it was not designed for that purpose. Similarly, perceptually framed adaptations within the perceptual system will prefer the sensory environments that they are attuned to, regardless of attention or mental imagery. In other words, the specific way that the motorbike is designed for the racetrack (the perceptual frame) does not bear upon how the bike is being ridden (the application of mental imagery) when considering what kinds of maneuvers the bike is able to perform.
This is what makes a perceptual frame a unique kind of perspective. How perceptual information is received and arranged can be trained and developed. However, top-down influences are not required at the time of perceiving in order for sensory information to be organized by the defaulted perceptual frame. Acute, expert forms of perception can function automatically; and insofar as they are default heuristics of the perceptual system, they apply to scenes ubiquitously. In this manner of speaking, the perceptual frame operates both as a kind of attunement to particular kinds of sensory information and as a specific directive for how sensory information is to be organized by the perceptual system. This is what enables radiologists to accurately diagnose images flashed before them at presentation times as brief as 200 ms per image (Drew et al. 2013). The radiologist’s perceptual frame is attuned to automatically favor particular sensory inputs associated with what they know aberrancies typically look like, and where those aberrancies are most likely to be found (Haider and Frensch 1999; Brams et al. 2019). However, there is an important question yet to be answered: can the organization of and sensitivity to sensory information afforded by a perceptual frame be sufficient for the individuation of perceptual objects that would otherwise be inaccessible without that frame?
To answer yes to this question amounts to saying that it is possible for any two perceivers with different perceptual frames to have distinctly different perceptual experiences given the same sensory input. Moreover, if it is the case that from the same set of sensory information two perceivers may have the ability to individuate unique perceptual objects inaccessible to the other perceiver, then the effect of perceptual frames on perceptual processing is both deep and pervasive. Where the structure of perceptual processing and experience is contingent upon a perceptual frame, that frame operates akin to the structural principles of perceptual objecthood discussed in section one. Without the appropriate frame, possible perceptual objects could not be individuated or identified. Accordingly, in the three subsections to follow, we survey a series of empirical data from perceptual processing in multisensory, amodal, and predictive settings to support the foregoing suppositions. We conclude on the basis of this evidence that perceptual frames often play a necessary role for perceptual objecthood.
i. Bayesian processing and multisensory integration
There is a conceptual problem known by many names in the philosophy and cognitive science of perception: the binding problem (Treisman 1998), the many properties problem (Jackson 1977; Clark 2000), the many-many problem (Wu 2014), the ambiguity of sensory combination (Ernst and Bülthoff 2004), etc. The problem is that in any sensory environment there are infinitely many possible combinations of sensory features. The question then arises: how are our perceptual capacities able to discriminate, integrate, and circumscribe multitudes of sensory data into a coherent and veridical representation of the environment? Given what we have reported above, a partial answer to this question can be offered in the form of the structural principles of perceptual objecthood. The structural principles of perceptual objecthood sufficiently explain the circumscription and integration of basic sensory features into simple perceptual objects. However, as the sensory environment becomes either more complex or more ambiguous, the structural principles cease to sufficiently explain feature integration. This is why, as we argued above, perceptual frames make a significant impact – especially for trained and expert perceivers.
The problem becomes much more of a challenge when we take into account the fact that the sensory environment is rarely, if ever, entirely unisensory. Just about every perceptual scene that we encounter affords sensory information about the same objects across different sense modalities. We can hear a guitar play just as well as we can feel its strings when plucked, or see our fingers set against its frets. Given that sensory environments contain multiple sources of sensory information about the same perceptual objects, many of those information streams are redundant with one another. Yet, despite complex sensory noise, perceptual ambiguity, and multisensory redundancy, the perceptual system is quite adept at representing clear, integrated, and coherently structured environments. As we contend in what follows, this is possible because perceptual frames often help to make these adjudications. This fact is exemplified by an influential account of Bayesian perceptual processing called Maximum Likelihood Estimation (MLE).
MLE holds that redundant streams of perceptual information are integrated into a single multisensory representation if and only if doing so optimizes veridical representations of perceptual objects. If it is the case that integrating a guitar’s auditory properties with its visual and tactile properties will afford an optimal (most veridical) representation of that guitar, then the perceptual system will do so. However, where redundant sensory data may be noisy or imperfect, MLE predicts that the perceptual system will rely upon other, more valid sensory data to individuate perceptual objects. Significantly, the success of such calculations requires a sensitivity to what sensory information counts as noise, what counts as valid, and what sorts of combinations will yield optimal representations. Given that this sensitivity affects bottom-up sensory processing, is trainable, and makes a significant difference in the representation of perceptual objects, it is a strong candidate for a perceptual frame.
For example, in one experiment (Ernst and Banks 2002), subjects were asked to determine the height of a bar that could be both felt and seen. When the visual information was muddied with extra visual noise, the subjects relied on touch to make their judgments; the opposite effect was found when tactile inputs were made noisier than the visual inputs. Further, by varying the degree of noise across either visual or tactile inputs, the researchers developed a Bayesian model that is able to predict to what degree any stream of sensory information will be utilized by the perceptual system when individuating objects. According to the model, if two streams of sensory data are redundant and equally reliable (e.g., you can both clearly see and feel an object), both sets of data will be used equally to make estimations about the object. However, when noise is added to a redundant sensory input, the perceptual system weights that noise against the input’s reliability for constructing an optimal perceptual representation. Thus, the perceptual system will rely more upon the clear sensory data and less upon the noisy sensory data, in inverse proportion to the amount of noise present (see Fig. 2, which presents the optimizing functions of the perceptual system as demonstrated in a pair of studies by Kirsch and Kunde 2023).
Fig. 2 In the first task on the left, participants judged the distance that a target moved on a screen by tracing a stylus below an occluder. In the second task on the right, participants placed marks on either side of an object to enclose it. In both tasks, visual and tactile noise were introduced to demonstrate how the participants’ perceptual systems responded to bias by preferring the alternative, optimal stream of sensory information. The middle graphic demonstrates this optimizing strategy in different instantiations of intersensory conflict. From Kirsch and Kunde (2023), used under the Creative Commons CC BY license
Thus, MLE’s Bayesian model predicts how the perceptual system utilizes sensory information as a function of that information’s probability of producing optimally accurate perceptual object representations. This explains, in part, how the perceptual system discriminates between different streams of sensory information to integrate, bind, and individuate perceptual objects with reliable accuracy. At this point, then, we have the data we need to begin to make a case for perceptual frames on the basis of multisensory feature integration. Inasmuch as the perceptual system reliably assigns values of probability to multiple streams of sensory information in order to construct an optimal estimation of the perceptual scene, this is the beginning of a perceptual frame. The sensory information that is counted as clear is integrated, and the perceptual information that is counted as noisy is left out. Further, the principle by which the perceptual system is ready to assign these probabilities is latently active, ready to be applied in the act of perceiving.
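To make the weighting scheme concrete, the following is a minimal sketch of the reliability-weighted fusion rule that MLE predicts under Gaussian noise (after Ernst and Banks 2002). The function name and the numbers are our illustrative assumptions, not data from the studies cited above; the point is simply that each cue is weighted by its inverse variance, so the noisier a channel becomes, the less it contributes to the fused estimate.

```python
# Minimal sketch of MLE cue combination under Gaussian noise
# (after Ernst and Banks 2002). All names and numbers are illustrative.

def mle_fuse(estimates, variances):
    """Fuse redundant sensory estimates by inverse-variance weighting.

    Each cue's weight is proportional to its reliability (1 / variance),
    so a noisier channel contributes less to the combined estimate.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    fused = sum(w * e for w, e in zip(weights, estimates))
    fused_variance = 1.0 / total  # the fused estimate is never noisier than either cue alone
    return fused, fused_variance

# A bar whose height is both seen and felt: vision says 5.0 cm, touch says 5.6 cm.
# With equally reliable cues, both contribute equally ...
print(mle_fuse([5.0, 5.6], [0.25, 0.25]))  # -> (5.3, 0.125)
# ... but when visual noise is quadrupled, the estimate shifts toward touch.
print(mle_fuse([5.0, 5.6], [1.0, 0.25]))   # -> (5.48, 0.2)
```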
Significantly, the original and basic formulations of MLE’s Bayesian function made consistent predictions across study participants on the assumption that normalized assignments of probability are baked into normally functioning perceptual capacities. That is to say, the original calculations of MLE do not account for differences in the assignment of probability, or, in other words – a difference of perceptual frame. If the basic probability schema counts as a perceptual frame, it is a frame shared by the vast majority of perceivers. Nevertheless, MLE’s model can account for unique perceptual frames bespoke to perceivers whose perceptual systems integrate multisensory information differently. Recent developments in MLE that have focused on “bias” and “prior knowledge” do just this (Helbig and Ernst 2007; Ernst 2012; Mandrigin 2018). These developments indicate that where there is “prior knowledge,” there is a perceptual frame at work.
It is not uncommon for a perceptual system to be biased such that it produces non-veridical representations of the world. For instance, without glasses, one’s eyes may represent the world as blurrier than with glasses; similarly, tinnitus can cause one to hear ringing where none exists. Extreme cases of perceptual bias have long been an area of interest for researchers, who have developed studies that manipulate vision by inverting the visual field 180° (Stratton 1896, 1897; Helmholtz 1924), or shifting it some small degree to the left or right (Held 1961; Held and Bossom 1961). What the data of these studies show is that, over time, the perceptual system will re-tool itself to adjust to the biased shift of sensory presentation. Where one might at first overreach for an object because their vision is off by 35°, after training, coordination eventually returns. The perceptual system is capable of learning the difference between what is veridical and what is the bias through which it represents the world. This difference is prior knowledge, and likewise an example of a perceptual frame.
In cases where perceptual information is potentially biased, “this bias may be unstable, for example because of fast adaptation processes that constantly react to small discrepancies […] The brain could learn this bias uncertainty and use this knowledge to emphasize the more stable estimates” (Ernst and Bülthoff 2004, p. 168). In other words, the brain and perceptual system can learn to discriminate perceptual information that has been biased by noise from optimally veridical representations. On the basis of this learned perceptual information, bias can be taken into account when calculating the maximum likelihood of an object’s veridicality (Helbig and Ernst 2007; Ernst 2012). Significantly, because the processes and goals of perceptual learning can differ between observers, so too can the consequent calculations that are made to integrate reliable multisensory cues into perceptual objects. Accordingly, the perceptual systems of different perceivers who bias perceptual information differently may, in fact, produce unique perceptual objects given the same sensory inputs. Insofar as a perceiver’s prior knowledge is uniquely attuned to their own history of perceptual learning and overcoming of perceptual bias, the basic functions of their multisensory integration will differ from those with a different history. Thus, the effects of perceptual frames are demonstrated by the perceptual system’s learned response to bias.
For instance, in studies that introduce bias by shifting visual perception askew, adaptation effects are the measured results of subjects who remove the vision-shifting glasses and perceive the ordinary world as if it were still offset (e.g., the world is no longer skewed 15 degrees to the right, but subjects still mis-reach for objects). Significantly, only the study participants who actively learned to adapt to the skewed vision exhibit adverse adaptation effects after taking off the world-shifting glasses (see Bermejo et al. 2020 for a review). Subjects who did not interact with the surroundings while wearing the glasses do not exhibit adaptation effects. That is, participants who controlled their movement actively adapted to the skewed perceptual world by implementing a bias-responsive perceptual frame, which persisted even after taking the glasses off. In contrast, participants who did not interact with the environment did not adapt their perceptual frame to the new visual information. Accordingly, when they removed the glasses, they slipped right back into their normally-tracking perceptual frame. Therefore, these studies are evidence that perceptual frames are learned adaptations of the perceptual system. Although the frames are constructed with the aid of top-down influence (i.e., subjects needed to be in control of their movements in order to adapt), their operations occur bottom-up (i.e., after taking off the glasses, adaptation effects occurred only for the group that had constructed the bias-responsive perceptual frame).
To bring it all together: MLE is a Bayesian model that predicts whether features from multiple sense modalities are integrated into a single multisensory object, and if so – how. The principal calculation predicted by MLE is a function of the perceptual system to seek and to organize sensory environments such that they are optimal: maximally veridical and minimally variant, noisy, or biased. Prior knowledge can influence how the perceptual system deals with variance and bias. In such cases, the weight that prior knowledge carries in a perceptual system’s determination of maximal invariance is what we have called the perceptual frame. Hence, the perceptual frame determines whether and how multisensory perceptual objects are individuated within a perceiver’s sensory environment. Insofar as prior knowledge can shift the probability of veridicality towards familiar forms of organization, perceptual frames modulate what will count as optimal representations of the sensory environment. Consequently, for perceivers who have biases and have learned different strategies to address them, their perceptual frames may be so bespoke as to yield the perception of very different perceptual objects even given the same set of sensory information.
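As a schematic illustration of this last point (our construction for exposition, not a model drawn from the literature cited above), prior knowledge can be treated as one further Gaussian source that is fused with the incoming cue. Two perceivers who have learned different priors then settle on different estimates, and potentially different perceptual objects, from identical sensory input:

```python
# Schematic sketch: a learned prior fused with a sensory cue (a MAP-style estimate).
# Our illustration only; the names and numbers are assumptions, not published data.

def fuse_with_prior(cue_mean, cue_variance, prior_mean, prior_variance):
    """Combine a sensory cue with a learned Gaussian prior.

    The posterior mean is a reliability-weighted average of cue and prior,
    so a strongly held prior (small variance) pulls the percept toward it.
    """
    w_cue = 1.0 / cue_variance
    w_prior = 1.0 / prior_variance
    return (w_cue * cue_mean + w_prior * prior_mean) / (w_cue + w_prior)

# Identical sensory input, two differently "framed" perceivers:
same_input = 10.0
print(fuse_with_prior(same_input, 1.0, prior_mean=6.0, prior_variance=1.0))    # -> 8.0
print(fuse_with_prior(same_input, 1.0, prior_mean=14.0, prior_variance=0.25))  # -> 13.2
```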
ii. Amodal completion and mental imagery
The purpose of this section of the paper has been to demonstrate that perceptual frames are a necessary component of perceptual object individuation and, further, how different perceptual frames can yield different perceptual objects given the same sensory input. Above, we have shown how the perceptual system employs principles of optimality to organize multisensory environments. These principles, insofar as they are learned and implemented to address unique cases of perceptual bias, are one example of perceptual framing. In what follows, we shift gears from multisensory perceptual organization to amodal completion, mental imagery, and predictive anticipation in order to demonstrate that perceptual framing also plays a necessary role in the possibility of perceptual object individuation.
Amodal completion is the early perceptual processing and representation of an object without requiring sensory stimulation of some of that object’s parts (Nanay 2018; Thielen et al. 2019). It can occur under a number of different circumstances, but for the purposes of the discussion here we shall focus on this simple definition. The first example of amodal completion included below is a circle occluded by a square. Although the circle appears behind the square and we do not perceive 25% of it, by amodal completion we nevertheless represent that there is a circle behind a square. The second example below is known as the Kanizsa triangle, an illusion presenting the completion of an upside-down triangle within the negative space of an incomplete, right-side-up triangle. In both cases, the perceptual system completes features that are not present, insofar as we represent the complete circle or triangles in the absence of sensory stimulation. Moreover, we may differentiate top-down amodal completion (e.g., where object knowledge and expectations shape the completion) from bottom-up amodal completion (e.g., the completion of shapes on the basis of Gestalt principles). For our purposes in illuminating the role of perceptual frames in object perception, we are interested in the top-down influenced sort of amodal completion.
Fig. 3 Two examples of shape completion: a circle occluded by a square, and the Kanizsa triangle
What is important about both of the examples in Fig. 3 is that the perceptual system has the ability to represent features despite a lack of sensory stimulation. Further, these representations are distinctly perceptual (as opposed to cognitive) insofar as amodal completion is performed in the early stages of perceptual processing through the primary visual cortex (Ban et al. 2013; Bushnell et al. 2011; Emmanouil and Ro 2014; Hazenberg et al. 2014; Lee et al. 2012; Pan et al. 2012; Scherzer and Ekroll 2015). Initial brain responses to amodal completion emerge 140 ms after stimulus presentation (Guttman and Kellman 2004), followed by differential responses after 240 ms (Murray et al. 2004). In both cases, despite fragmentary inputs, we have shape completion processed in the primary visual cortex (V1), which can be explained by intermediate representations of contour interpolation (Kellman and Fuchser 2023). Importantly, both of the foregoing examples of amodal completion are ubiquitous among perceivers with normally functioning perceptual capacities. However, amodal completion can occur differently for different perceivers when sensory scenes either grow increasingly complex or when there is a dearth of sensory information. Further, these differences can be manipulated in empirical settings, demonstrating the effects of perceptual frames on amodal completion.
This shall serve as the basis of our argument, as follows: (1) when amodal completion occurs, the ‘missing pieces’ of a completed object are represented. However, (2) if there were no reason to represent the parts of the object that are missing, then those missing pieces would not be perceived. So, for example, if you are familiar with what guitars look like and you see a part of one behind an occluding surface, you may amodally complete a representation of the guitar’s parts which are not visible to you. However, if you had never seen a guitar before, then there would be no principled basis by which you or your perceptual system might be able to represent whatever mysteries lie on the other side of the occlusion. Accordingly, (3) by contraposition, if the missing pieces of perceptual objects are perceived, then there is a reason to represent them. Therefore, if amodal completion occurs, then there is a reason to represent the missing pieces of occluded or incompletely presented perceptual objects. What, then, stands as the ‘reason’ that explains the representation of amodally completed objects?
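Before turning to answers, the validity of this argument can be made explicit (schematically, in our notation: AC for ‘amodal completion occurs’, Rep for ‘the missing pieces are represented’, and Reason for ‘there is a reason to represent them’):

```latex
\begin{align*}
&(1)\ \mathrm{AC} \rightarrow \mathrm{Rep} \\
&(2)\ \lnot\mathrm{Reason} \rightarrow \lnot\mathrm{Rep} \\
&(3)\ \mathrm{Rep} \rightarrow \mathrm{Reason} && \text{contraposition of (2)} \\
&\therefore\ \mathrm{AC} \rightarrow \mathrm{Reason} && \text{from (1) and (3), hypothetical syllogism}
\end{align*}
```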
Nanay (2010, 2018, 2023) has argued that mental imagery is the foregoing reason for the representation of amodally completed objects. According to Nanay, the imposition of a mental image on a scene completes what is occluded by virtue of how we expect the rest of the occluded object to appear (in the case of vision, at least). As in the example of a partially occluded guitar, knowing what a guitar looks like (having the mental image of a guitar) enables our perceptual system to amodally complete where the object is occluded. Yet, could perceptual frames also help to explain the possibility of amodal completion? We have argued that perceptual frames are distinct from mental imagery insofar as frames are adaptations of the perceptual system that prefer to represent and organize sensory information in particular ways. Whereas mental imagery involves a particular instantiation of perceptual representation not triggered directly by sensory input (Pearson et al. 2015; Dijkstra et al. 2019), the effects of framing reflect the perceptual system’s built-in preferences, which appropriate sensory bias and learning for the sake of the veridical organization of sensory data.
Along similar lines, Briscoe (2011) has argued that amodal completion is not a monolithic perceptual phenomenon for which mental imagery is universally sufficient. As he demonstrates, there are cases in which amodal completion can be influenced by top-down processes, yet occurs in a bottom-up manner similar to the effects of perceptual framing – and, thus, without the need for mental imagery. However, Nanay (2023) contends, against Briscoe, that the application of mental imagery can be unconscious and bottom-up. His evidence comes from two cases – both of which, we shall show, are better explained by perceptual framing than by unconscious mental imagery. We then argue that where mental imagery is necessary for completion, it relies on the effects of perceptual frames to represent veridically. We conclude alongside Briscoe that not all cases of completion depend upon imagery, and further posit that some of those instances are best explained by perceptual frames. We also conclude alongside Nanay that mental imagery is a powerful tool in completion, made all the more useful because of the effect that frames have in affording ecologically valid sensory information to processes of mental imagery.
Nanay argues that unconscious mental imagery explains how, in cases of aphantasia, a subject can be unable to mentally imagine a shape but nevertheless able to discriminate whether random targets on a coordinate grid would fall within the boundaries of that shape were it to be visibly present (Jacobs et al. 2017). Similarly, regardless of whether subjects were asked to imagine or to repress imagining a red apple, Kwok et al. (2019) found that the subjects were equally primed in either condition to perceptually prefer red cues over alternatives. In both cases, Nanay admits that neither set of subjects could have imposed a top-down mental image in order to produce the priming effects that the researchers found. However, because there were consequent priming effects, Nanay concludes that the subjects employed unconscious, bottom-up mental imagery. To the contrary, neither of the foregoing cases that Nanay introduces as evidence for a bottom-up and unconscious application of mental imagery necessitates such a complex and representationally rich architecture.
Just because an aphantasic cannot mentally imagine a shape does not mean that they are unable to recognize shapes when they see them; nor does it mean that their perceptual systems are unable to collect, organize, and categorize sensory information as optimal, veridically represented objects. Underlying structural principles of perceptual objecthood guide how the perceptual system integrates and organizes sensory information into objects like shapes. Accordingly, the structural principles that organize both normally functioning and aphantasic perceptual experience need not rely on the rich content of imagery to function. Consequently, a perceptual frame may be set such that the aphantasic’s perceptual system is predictively readied to organize sensory information in order to represent the shape’s distinctive features if they were to appear. Given this framed readiness, targets that appear within the boundaries of the would-be shape do not require a mental image of the shape to be categorized as consistent or inconsistent with it. A similar case can be made for the ‘imagine a red apple’ study. Rather than an unconscious mental image of an apple causing the measured effects, the perceptual system itself may alternatively be framed to seek and represent individual features associated with red apples. Thus, red cues are preferred not because they are congruent with the mental image of an apple, but rather because the perceptual mechanisms that prefer redness and roundness and shininess are set to high alert by the associated perceptual frame. The mental image of an apple – unconscious or otherwise – is perhaps sufficient, but not necessary for the perceptual system to seek out and organizationally prefer red, round, and shiny properties in the sensory environment.
For this reason, the invocation of mental imagery to explain priming effects and the amodal completion of perceptual objects is an important tool: where mental imagery is utilized, completion can occur. Yet, not all instances of completion require rich mental imagery to operate. In such cases, how the perceptual system is framed can explain why features are primed and organized as they are. Moreover, the framing of the perceptual system to prime and organize sensory environments seems an important prerequisite of mental imagery in amodal completion. One would not have the opportunity to appropriately apply a mental image were the requisite sensory information ignored by the perceptual system. Mental imagery aids in veridical perception only when images fit the scenes that they aim to complete. Barring hallucination, one cannot veridically apply one’s mental imagery of northern hemisphere constellations to the southern hemisphere’s night sky. So, it would seem that mental imagery is an incomplete explanation for amodal completion. Where it is necessary, one must not only have a mental image available to apply to a scene, but the scene itself must present ecologically valid sensory information for that image to fit. This is why the effects of perceptual frames may serve as an elementary component of amodal completion in addition to mental imagery. Where mental imagery is necessary for amodal completion, perceptual frames are necessary for imagery. Frames guide the prerequisite functions of the perceptual system to organize sensory information in order that mental images may optimally apply to occluded portions of perceptual objects.
The foregoing arguments imply that whenever amodal completion occurs, there is a perceptual frame at work enabling non-occluded sensory information to be organized such that the occluded sensory information may be completed, with or without a mental image. Perceptual frames shape amodal completion at the level of perceptual processing by determining which perceptual objects may emerge from the sensory environment, by way of mental imagery or otherwise. For instance, a poor-quality x-ray image with missing or blurred details can be differently completed and represented by a novice, a visual artist, an architect, a dentist, or a radiologist. Different domains generate different frames which, in turn, generate unique representations. Thus, the perceptual objects that experts perceive as a result of direct sensory stimulation can be uniquely enriched by amodal completion. Per the foregoing argument, this is because the perceptual frame operates as the principle or reason for which amodal completion occurs.
And you don’t need to be a perceptual expert to experience the effects of perceptual framing on amodal completion! One rather provocative example of this is included here as a pair of bimodal images presented in Figs. 4 and 5. If you have not seen these images before, you will likely be able to experience the effect of perceptual framing on your ability to amodally complete them. Have a look at Fig. 4 first, without looking at Fig. 5. The image is likely ambiguous and will not mean much to you beyond splotches of black and white. However, once you have done this, look at Fig. 5 and then back to Fig. 4. When you have seen the image in Fig. 5, Fig. 4 ceases to be ambiguous – you know what you are looking at because your perceptual system has been framed to complete it.