6 The Frame Problem and the If-Then Problem

6.1 The Frame Problem 155
6.2 Avoiding the Frame Problem Leads to the If-Then Problem 157
6.3 A Compound Architecture Still Faces the Frame Problem 161
6.4 A (Partial) Solution 163
6.5 How Cognition Partly Avoids and Partially Solves the Frame Problem 169

Chapter Summary 173

6.1 The Frame Problem

The last chapter painted a picture of the way concepts act as an interface between special-purpose informational models and conceptual reasoning. This chapter argues that the picture presented there shows how it is that human cognition manages to solve the notorious frame problem (to the extent that it does). Part of the solution is to avoid the problem, as I will explain, but that throws up the lesser-known ‘if-then’ problem. Human cognition manages to navigate, well enough, between these two problems by relying on the plug-and-play character of concepts.

The frame problem we are concerned with here is the problem of relevance-based search (Fodor 1987, 2000, 2008; Samuels 2010; Xu and Wang 2012; Antony and Rey 2016; Shanahan 2016). A person or computer carries out inferences in order to work out what is the case or what to do. How does the system select, in a way that is computationally tractable, which stored representations to perform inferences on? How do we take decisions on the basis of what is relevant without having to consider and reject all that is not relevant?

The problem arises because there is no simple rule to decide which information is relevant to a given question or task. Relevance is ‘isotropic’—relevant considerations can come from any direction (Fodor 1985; Chow 2013). I’m thinking about what to have for breakfast. It turns out that deforestation in Borneo is a relevant consideration. (Does the margarine contain unsustainable palm oil?) Unless potential relevance is constrained to a specifiable and tractable subset of everything I know and believe, it seems that, in order to assess which stored representations are relevant, the system will have to check through all stored representations and assess each for relevance. But the task of checking every piece of stored information for potential relevance is computationally intractable.

The frame problem arose long ago in artificial intelligence research.1 Cognitive and computational scientists building computer systems to perform complex tasks—tasks that display aspects of what in the human case we might call intelligence—found that the selection of relevant information from a large store of memories presented real practical problems. On the other hand, this seems to be something that humans do with some facility, perhaps giving an indication of a human cognitive competence that was not well modelled by classical computational systems.

A closely-related problem is the question of how to model abduction or inference to the best explanation. Here again the relevance of information is isotropic. Considerations that are potentially relevant to the goodness of an explanation can come from anywhere. In addition, inferring the best explanation seems to require an overall evaluation of a wide range of factors. It calls for a global assessment of a collection of beliefs. For example, the conclusion that the post-industrial increase in the earth’s mean temperature is largely caused by human activity is well supported. It is the best explanation of a wide range of data and phenomena. However, reaching the conclusion that this is the best explanation is extremely complex, requiring a very wide range of information to be weighed and assessed (data, models, scientific arguments). One reaches a conclusion on this question by taking a global assessment of many different considerations, some central, others peripheral, not all pointing in the same direction. Even if we do not actually perform a genuinely global assessment of the import of everything we believe in order to answer this kind of question, the phenomenon suggests that some kind of non-local computational process may be involved (§1.3).

The frame problem is, in the first instance, a problem for those designing computational systems. It presented itself as a major obstacle to feasible artificial intelligence when classical computational systems were the central tool of AI research. (With the rise of deep neural networks (DNNs), the frame problem faded into the background, as we shall see.) It is also a problem for those seeking to understand the mind computationally. This is why it is so significant philosophically. Our most successful account of intelligent thought and action is the representational theory of mind (RTM). Representations are manipulated physically in ways that are faithful to their semantic content. What Fodor calls ‘central cognition’ appears to be able to retrieve information according to relevance, and to perform the non-local computations required for abduction. How is that achieved by the physical manipulation of representations, in a way that is computationally feasible? Fodor declared that the failure to answer that question makes the workings of central cognition the great mystery of cognitive science (Fodor 2000, pp. 23, 99; Xu and Wang 2012).

In section 6.2, I show how DNNs succeed in avoiding the frame problem, to a large extent. However, in doing so, they end up facing a problem of their own, which I label the ‘if-then’ problem. The if-then problem and the frame problem are in some ways complementary, but when a task is not susceptible to the if-then solution, and calls for broadly-logical reasoning from explicit representations, the frame problem still arises (§6.3): which representations should reasoning be performed on? In section 6.4, I argue that the account of concept-driven thinking advanced in Chapter 5 offers a partial solution: special-purpose informational models can be used as a way of generating relevant information. With concepts acting as an interface between special-purpose informational models and general-purpose reasoning, cognition can partly avoid, and partially, imperfectly, solve the frame problem (§6.5).

6.2 Avoiding the Frame Problem Leads to the If-Then Problem

Computational modelling offers insights about how human cognition might solve the frame problem. In recent years concern with the frame problem has subsided in computer science. In AI research, classical computation has been eclipsed in most areas by DNNs. The interest of DNNs is not that they are realistic psychological models—they clearly differ in profound ways from human cognitive competences—but because they show how certain problems can be solved in principle, and potentially offer partial models of particular aspects of human cognition. DNNs do not seem to face the frame problem—at least, relevance-based search does not arise as a concrete issue that modellers are forced to grapple with. As it has receded as a practical concern, theoretical work on the frame problem has also subsided. Nevertheless, the problem has not gone away. As we will see in this section, DNNs do not so much solve the frame problem as avoid it. In the following section (§6.3) we will see where the bump in the carpet has popped up now.

DNNs in effect build in assumptions of relevance. By having a huge number of free parameters (e.g. 1.7 trillion in the GPT-4 large language model), and by being given enormous amounts of data, DNNs can be trained to produce appropriate outputs in response to a wide range of different inputs. They store what they have learnt, not in the form of discrete memories of the data they were trained on, but in the entire pattern of weights distributed across their interconnected layers. When the system encounters a new input, it has no need to retrieve information stored during training on which to perform inference. It just proceeds to produce the output that has been trained into it by experience. Each relevant past experience has left its trace on the system’s processing dispositions through gradual adjustments made, across many successive cycles of training, to the whole pattern of weights.
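
To make the contrast concrete, here is a minimal sketch (plain Python/NumPy, not any of the models discussed) of the if-then character of a trained network: everything it has learnt lives in its weight matrices, and responding to a new input is just a forward pass, with no search over stored memories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network. In a real DNN these weights would be set by
# gradient descent over a large training set; here they are random,
# purely to illustrate the architecture.
W1 = rng.normal(size=(16, 8))   # input -> hidden
W2 = rng.normal(size=(8, 4))    # hidden -> output

def respond(x):
    """Produce an output from an input by a single forward pass.

    Nothing is looked up or retrieved: every past training example has
    already left its trace in W1 and W2, so 'relevance' is implicit in
    the weights rather than computed at run time.
    """
    h = np.maximum(0, x @ W1)   # ReLU hidden layer
    return h @ W2

new_input = rng.normal(size=16)
print(respond(new_input))       # an output, produced without any memory search
```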

It was long thought that this approach was inadequate to deal with real-world tasks. Fodor attributed the inability of computational models to perform these tasks to their failure to solve the frame problem, declaring:

the failure of artificial intelligence to produce successful simulations of routine commonsense cognitive competences is notorious, not to say scandalous. We still don’t have the fabled machine that can…translate everyday English into everyday Italian; or the one that can summarize texts; or even the one that can learn anything much except statistical generalizations.

(Fodor 2000, p. 37)

But now, of course, DNNs are actually doing quite well at performing these tasks. The breakthrough came in 2012 when a convolutional DNN broke all records for categorising pictures from the ImageNet data set (Krizhevsky et al. 2012). (I vividly remember this result for the way it reinvigorated my undergraduate lectures on connectionism that autumn.) Since then, DNNs have demonstrated considerable facility at ‘summarizing texts’ (Yousefi-Azar and Hamey 2017; Bubeck et al. 2023) and ‘translating everyday English into everyday Italian’ (Bahdanau, Cho, and Bengio 2014; Stahlberg 2020; Bubeck et al. 2023), as well as at many other tasks; Google Translate has transformed off-the-beaten-track travel.

What these results show is that statistical generalisation is much more powerful than previously thought. Yes, all kinds of background information is potentially relevant to translating a sentence. But it turns out that strikingly good results can be achieved by encoding input-output dispositions that implicitly encode particular assumptions of relevance. The pattern of weights reflects the way incoming information was relevant to producing the correct output for samples it was trained on. The perhaps surprising finding is that these assumptions allow the system to generalise effectively—to produce appropriate outputs in response to inputs that it has not previously encountered.

DNNs share with previous modular approaches to the frame problem an underlying limitation. The assumptions that are implicit in their operation only work well within a specific domain. Convolutional neural networks build in and learn assumptions about which features are important for categorising images. These assumptions are implicit in the content-specific dispositions they acquire as a means to solving the input-output problem on which they were trained (§3.2). Those dispositions are not suited to performing tasks in other domains.

This limitation may be circumvented by deploying multiple modules, each designed to deal with a different specific domain (Shanahan and Baars 2005). Image processing can be done by a trained ConvNet, language processing by a Transformer. Human cognition does involve multiple special-purpose informational models, as we have seen (Chapter 4). Many of these depend on implementing a suite of content-specific dispositions. Extensive experience in a domain, through the course of evolution and individual learning, endows the system with a set of if-then dispositions appropriate to its domain. There remains the problem of how to decide which inputs go to which systems. Perhaps a competitive process can help here, especially for inputs that are sufficiently distinct that they make no sense when presented to the ‘wrong’ module (Shanahan and Baars 2005). A module trained to process language will not settle on a specific output when presented with data from a visual image. A visual processing module presented with the same data will settle on the categorisation ‘elephant’ but would make no sense of linguistic data. On some views, the winning outputs are integrated together in a common working memory system (Shanahan and Baars 2005). On other views, all that’s needed is a collection of competing special-purpose modules—so-called ‘massive’ modularity (Carruthers 2003). We will return in the next section to the question of integration across different special-purpose systems.

A second criticism of the modular approach is that, even when we confine our attention to one special-purpose module, its behaviour will be insufficiently flexible to produce appropriate outputs. Often what is appropriate depends heavily on the context. Hearing ‘fire!’ shouted by the house manager in a theatre prompts quite different behaviour than when it is declaimed by an actor in the play; so too on a cold camping trip; or at a military training ground.

DNNs have shown that this problem is often surmountable. Context is something that the system can register as another aspect of the input. The appropriate output is not just a reaction to the currently-presented word or stimulus, but to a short history of information that the network has been fed. In Transformer-based large language models, the output is just the predicted next word, but the input is a long chain of text. What the system outputs next after the last word depends heavily on what came before. For example, given a joke, the PaLM model can output a string of words that explains the joke.2 But the input here is not just the joke, but a whole mini-essay that also gives the system two prior examples of jokes with explanations. With all that text as context, the assumptions of statistical relevance that have been trained into PaLM’s 540 billion parameters mean that the output that follows this long input is a text that amounts to the explanation of a joke. If DNNs are trained to deal with inputs that consist of such long streams of data, they can be highly sensitive to context—just by being sensitive to different features of the input.
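
As an illustration of treating context as just more input, here is a sketch of the kind of few-shot prompt described: two worked joke explanations precede the new joke, and the whole string is what the model conditions on. The `generate` function is a stand-in for whatever language model interface one is using; it is not PaLM's actual API.

```python
def generate(prompt: str) -> str:
    """Stand-in for a call to a large language model (hypothetical interface)."""
    return "<model-generated continuation>"

few_shot_prompt = (
    "Joke: <example joke 1>\n"
    "Explanation: <example explanation 1>\n\n"
    "Joke: <example joke 2>\n"
    "Explanation: <example explanation 2>\n\n"
    "Joke: <the new joke to be explained>\n"
    "Explanation:"
)

# The 'context' is nothing over and above this long input string.
# The next-word dispositions trained into the model's parameters do the rest.
explanation = generate(few_shot_prompt)
```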

If context counts as just another input, then the system has to go into a different state for each context it might encounter. Then, when the final element of the input comes along (‘fire!’), it can output the response that is appropriate in that context. The system deals with the past by changing the state of the processor. Botvinick et al. (2019) give the example of a DNN that was trained on an array of different reinforcement learning problems. Different problems consist of different stimuli with different stimulus-response-outcome probabilities, but all the problems share the same structure. The network does ‘meta-learning’, acquiring a set of weights that allows it to learn about a particular problem—a particular set of stimulus-response-outcome probabilities—on the fly, in its network dynamics. The context is a string of past inputs, and the system deals with the past by changing the state of the processor.

This raises a problem that C. R. Gallistel has long pressed as an objection to artificial neural networks (Gallistel 2008; Gallistel and King 2009). Suppose that, on the way home one evening, an agent observes that a certain tree has come into fruit. When, the next morning, they decide whether to turn left or right on leaving their shelter, the observation the night before can act as part of the input on which their behaviour is conditioned. The observed state of the tree is an input, I_1. Seeing the fork in the path the next morning is another input, I_n. All the observations they make in between count as further inputs, I_2 to I_(n−1). Their output O_L, turning left, is a response to the (complex) input I_1, I_2, …, I_n. Had they instead observed that the tree was not in fruit the night before, that is a different input, I_1′. The agent will behave adaptively if they are disposed to make a different output, O_R (i.e. turn right), when presented with the (complex) input I_1′, I_2, …, I_n. To condition its behaviour appropriately on the distant past, the system would have to have appropriate input-output dispositions with respect to extremely long chains of input. The DNN solution is to deal with the past by changing the state of the processor. That means it has to allocate dedicated processing resources to each potential input it might encounter. It was a significant discovery that a neural network could be trained to respond appropriately to so many different inputs, in a way that generalises accurately to other inputs of the same type. But this depends on the network model having an enormous number of free parameters. In the largest Transformer-based language models, the input—the prompt—can now be very long indeed. However, increasing the length of the prompt has a dramatic effect on how much computing power it takes to train the system and how many parameters are needed. This is a symptom of the weakness Gallistel pointed to. A system that has to devote dedicated processing resources to each chain of input it might encounter will eventually run up against the ‘infinitude of the possible’ (Gallistel and King 2009, pp. xi, xvi, 51, 136–48). It is a practical impossibility to encode a separate processing disposition for every potential input string to arbitrary depth into the past.
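
A back-of-the-envelope calculation brings out the force of the point. If a system must encode a separate input-output disposition for each possible chain of observations, the number of chains grows exponentially with how far back the relevant observation might lie. The figures below are illustrative, not drawn from Gallistel and King.

```python
# Suppose each time step offers k distinguishable observations, and the
# behaviourally relevant observation (the tree in fruit, or not) may lie
# up to n steps back in the input chain.
for k, n in [(2, 10), (10, 10), (10, 20)]:
    histories = k ** n
    print(f"{k} observations/step over {n} steps -> {histories:,} distinct chains")

# 2 observations/step over 10 steps -> 1,024 distinct chains
# 10 observations/step over 10 steps -> 10,000,000,000 distinct chains
# 10 observations/step over 20 steps -> 100,000,000,000,000,000,000 distinct chains
```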

Botvinick et al. (2019) raise the same problem as a practical obstacle for DNN modellers. The meta-learning solution they demonstrated for reinforcement learning will only extend so far. Even very large language models like GPT-4, which can take as their context an input thousands of words long, are poor at performing tasks that call for a longer-term memory within the context of the task, for example to write a novel with a coherent overall narrative (Bubeck et al. 2023). Botvinick et al. suggest a solution to the if-then problem, namely to store explicit memories of circumstances encountered and outcomes received. This echoes Gallistel and King’s argument that a practical computational system for solving real-world tasks must store and process explicit memories.

Taking stock, DNNs have shown that the if-then solution to the problem of relevance and context-sensitivity is much more effective than was ever imagined when modular architectures were originally touted as a solution to the frame problem. But the if-then way of taking account of context gives out eventually, as it encounters the ‘if-then problem’: the ‘infinitude of the possible’ and the need to devote dedicated processing resources to each long chain of input it might encounter. That obstacle can be overcome by remembering the past explicitly—that is, not by changing the system’s input-output dispositions (e.g. weight matrix), but by storing explicit memories of circumstances it encounters.

6.3 A Compound Architecture Still Faces the Frame Problem

We have canvassed two different ways of dealing flexibly with variable context: learned if-then dispositions and inference from explicit memories. These approaches have complementary costs and benefits (Botvinick et al. 2019; Shea 2023b). The if-then solution is learning-heavy and computation-light. It calls for a large amount of experience to acquire a range of useful input-output dispositions; but then it can produce an output rapidly in response to the current input. A system that stores explicit memories can potentially learn what to do much faster, even after a single exposure, but calculating how to respond to the current input is typically more computationally demanding—it may involve a tree-search through a combinatorially large space of chains of possible states and outputs.

We have seen that human cognition deploys systems that work in each of these ways. Many special-purpose informational models rely on content-specific processing dispositions. On the other hand, reasoning over conceptual representations can deal flexibly with stored explicit memories. Penn et al. (2008) argue that this kind of compound system is a good way to model human cognition. Researchers working on artificial intelligence also construct compound systems. Botvinick et al. (2019) discuss a compound model that uses a DNN to learn the problem space, and combines that with a gradually expanding memory record of every situation it has encountered (world state, action, reward) (Graves et al. 2016). When encountering a new situation, the system works out what to do by comparing the new situation to the most similar situation stored in its episodic memory, picking the action that proved most rewarding in that situation in the past. This means that the system can do one-shot learning, repeating what worked on a single occasion in the past without needing to have each experience painstakingly re-presented multiple times. Having once discovered what to do in response to a new situation, retrieval from episodic memory will tell it what to do when it encounters that situation, or one sufficiently similar, again.
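
A minimal sketch of the episodic-memory strategy just described, in the spirit of the Graves et al./Pritzel et al. systems though far simpler than either: every (state, action, reward) triple is written to a growing store, and a new situation is handled by finding the most similar stored state and repeating whichever action paid off best there.

```python
import numpy as np

memory = []   # grows with experience: (state_vector, action, reward)

def write(state, action, reward):
    memory.append((np.asarray(state, dtype=float), action, reward))

def act(state):
    """One-shot reuse of past experience.

    Note that the similarity computation runs over the entire store --
    this is where relevance-based search re-enters as the store grows.
    """
    state = np.asarray(state, dtype=float)
    dists = [np.linalg.norm(state - s) for s, _, _ in memory]
    nearest_state = memory[int(np.argmin(dists))][0]    # most similar past situation
    # among memories of (near-)identical situations, pick the best-rewarded action
    candidates = [(a, r) for s, a, r in memory
                  if np.linalg.norm(s - nearest_state) < 1e-6]
    return max(candidates, key=lambda ar: ar[1])[0]

write([0.0, 1.0], "turn_left", 1.0)
write([0.9, 0.1], "turn_right", 0.2)
print(act([0.1, 0.9]))   # -> "turn_left", reused after a single exposure
```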

Human cognition can perform content-general inferences on explicitly represented information retrieved from memory—both semantic memories and suitably conceptualised episodic memories can enter into broadly-logical reasoning. But memories can also be subject to content-specific inferences, for example in special-purpose informational models. The same is true in AI architectures. Many teams are experimenting with using explicit memories to transcend the limitations of a purely if-then solution, sometimes by processing those memories simply as further inputs to a trained DNN. Ryoo et al.’s (2022) model stores as explicit memory a summary of its whole history of inputs. The memory is read, written, and processed using a Transformer-based language model at each step. Park et al. (2023) simulate a group of agents living in a simple artificial world. The agents act and interact by receiving text as input and producing text as output. Each also has a memory of its own individual characteristics, circumstances, and preferences. Adding these memories as part of the input to the Transformer model means that each ‘agent’ produces outputs that reflect that individual’s character and situation.

When explicit memories act as inputs to a trained DNN, those inputs are just acting as further contextual cues to a trained if-then disposition. The system is still working within the range of behaviours it has been trained, end-to-end, to perform. It doesn’t take us beyond the if-then way of avoiding the frame problem. However, the capacity for content-general reasoning does offer the chance to go further. Given an explicit representation of a situation completely outside the range of situations on which its if-then dispositions were trained, a system endowed with the capacity for content-general reasoning can still do something sensible. It can perform broadly-logical reasoning to combine the things it knows and reach new conclusions. This gives the system something worthwhile to do with memories that fall outside the range of things it has been trained to have specific dispositions to respond to.

The same is true for generating novel thoughts. General-purpose compositionality means that concepts can be combined in new ways to formulate completely novel thoughts, representing situations that fall far outside the system’s experience. Einstein could formulate the idea of running at the speed of light to pursue a light beam (Einstein 1970, p. 53).3 The capacity for broadly-logical reasoning means the thinker can perform inferences on novel thoughts, even when their trained if-then dispositions are of no use. Einstein inferred that he should expect to see the beam of light as an electromagnetic field at rest and spatially oscillating. The capacity for content-general inference allows human thinkers to deal intelligently with novelty: novel explicit memories or novel combinatorially-generated thoughts.

To take stock, the frame problem is avoided by systems that are trained to have sufficiently rich input-output dispositions, but that eventually runs into the if-then problem. A solution to that is to store explicit memories and, in the human case at least, to compute with them in content-general ways. This is where the bump in the carpet re-emerges. Although the two solutions are in some ways complementary, a compound approach that relies partly on stored explicit memories will then face the problem of selecting which memories to compute with. It will still need to overcome the frame problem.

We see this in the AI systems that deploy a compound architecture that includes an explicit memory store. The system in Graves et al. (2016) stores explicit memories, as we just saw. It can work out what to do in the current situation by repeating the action that led to reward when the same or a similar situation was encountered in the past. But to retrieve memories of the same or similar situations, it has to perform an operation that takes all stored memories as input. Pritzel et al. (2017) build a reinforcement learning system that writes all experiences to memory. Although they have a more efficient mechanism for calculating which past experiences are most relevant to the current context, doing that depends on first using (actually approximating) a time-consuming ‘k nearest neighbours’ search across its whole memory store (Yang et al. 2020, p. 129276). Park et al. (2023) store, for each virtual agent, a comprehensive record of every event experienced by that agent. A small subset of these memories are retrieved to act as part of the input at a given time-step. Memories are selected based on their recency, importance, and relevance to the current situation. Calculating relevance, however, involves calculating the similarity between the current situation and every event stored in memory. In short, these compound models face the problem of relevance-based search and deal with it using operations that take account of the entire store of memories.
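
The retrieval rule in Park et al. (2023) can be sketched roughly as follows. The weighting and the embedding function here are placeholders of my own; the point is only that the relevance term requires a similarity computation against every stored event.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a text-embedding model (hypothetical; deterministic toy vectors)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=8)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(memories, current_situation, now, k=3):
    """Score every stored memory by recency, importance, and relevance.

    memories: list of dicts with 'text', 'time', and 'importance' (0-1).
    The relevance term is what makes this a whole-store operation.
    """
    query = embed(current_situation)
    scored = []
    for m in memories:                                  # touches every stored event
        recency = 0.99 ** (now - m["time"])
        relevance = cosine(embed(m["text"]), query)
        score = recency + m["importance"] + relevance   # illustrative equal weighting
        scored.append((score, m))
    scored.sort(key=lambda sm: sm[0], reverse=True)
    return [m for _, m in scored[:k]]
```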

What these computational models suggest is that, although there are advantages to a compound architecture that overcomes the if-then problem by storing explicit memories, the frame problem—the problem of how tractably to search the store of memories for relevance—then re-emerges.

6.4 A (Partial) Solution

I am going to suggest that the solution deployed in human cognition is not just a compound, but a hybrid—a hybrid, in that it can take advantage of the if-then approach as a way of searching memory. The last chapter sketched the picture, with concepts mediating between general-purpose reasoning and special-purpose informational models. This is imperfect: it is not a complete solution to the frame problem, but a way of approximating a solution. I will argue that it offers a good picture of how human cognition manages partly to avoid the frame problem and, when it does arise, to deploy a partial solution.

A suggestion in the literature on how human cognition deals with the frame problem is to use content-addressable memory. The idea is that only a small subset of memories are retrieved as being potentially relevant in the current context. For example, the system could store information about a given individual or category in a mental file (Chow 2013). Relevant information can be retrieved from memory by searching all the information in the mental file. Which files should be accessed? Carruthers suggests that, in the context of considering a linguistic statement, one should perform a content-based search of all the concepts expressed by the statement (Carruthers 2003). That would certainly generate some relevant information, but for an even moderately complex real-world problem, it would still involve searching through an enormous number of representations to check each for relevance. In considering, ‘should I have cereal, toast, or fruit for breakfast?’, does my decision-making system really need to check for relevance everything I know or believe about breakfast cereals, toast, and fruit? Each of these concepts content-addresses a huge amount of information (not to mention my concept of myself). And even that wide-ranging approach would miss many relevant considerations, unless it were to expand outward and access information addressed by concepts used within the files (e.g. margarine in the toast file). So while content-addressable memory is surely part of the solution, we still need an account of how it can be implemented in a way that is not, on the one hand, too myopic to be useful or, on the other hand, too demanding to be feasible.
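
A toy rendering of the mental-file proposal shows why unconstrained content-addressing scales badly: retrieving everything filed under each concept in the question, and then everything filed under concepts used within those files, quickly pulls in a large set to check for relevance. The file contents below are invented purely for illustration.

```python
# Each 'mental file' maps a concept to the pieces of information filed under it.
files = {
    "toast":     ["made from bread", "often spread with margarine", "eaten warm"],
    "cereal":    ["eaten with milk", "often sweetened", "kept in a box"],
    "fruit":     ["apple is a fruit", "perishable", "source of fibre"],
    "margarine": ["may contain palm oil", "substitute for butter"],
    "palm oil":  ["grown on plantations", "linked to deforestation in Borneo"],
}

def retrieve(concepts, depth=1):
    """Pull everything filed under the given concepts, expanding outward to
    concepts mentioned within those files for `depth` further steps."""
    collected, frontier = set(), set(concepts)
    for _ in range(depth + 1):
        new_frontier = set()
        for c in frontier:
            for entry in files.get(c, []):
                collected.add(entry)
                new_frontier |= {k for k in files if k in entry and k not in concepts}
        frontier = new_frontier
    return collected

print(len(retrieve(["toast", "cereal", "fruit"], depth=0)))  # -> 9: the files for the question itself
print(len(retrieve(["toast", "cereal", "fruit"], depth=2)))  # -> 13: expanding outward pulls in still more
```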

My suggestion is that this is achieved in our case, at least in part, by leveraging the assumptions of relevance found in special-purpose informational models, especially in their content-specific processing dispositions. Here I will consider two kinds: the direct-CS transitions that take place between conceptual representations (§3.4); and non-local transitions that take place through a form of parallel constraint satisfaction in a representational state space (§4.4). Most of our focus will be on the latter, but I start briefly with the former.

Recall from section 3.4 that there is evidence that content-specific transitions take place between conceptual representations directly. Just tokening an occurrent belief may dispose the thinker to token a consequent thought. For example, someone who thinks Moby is a whale may thereby be disposed to think Moby is a mammal (similarly if moby is replaced with any other singular concept). The inferential disposition is ‘built into’ the concepts and does not require a general premise (i.e. all whales are mammals). This is quite unlike looking up all the information in a mental file, since only a small number of transitions are potentiated. These inferential dispositions in effect build in assumptions of relevance, not for a specific domain (as in the visual system), but for a specific concept. This is not a solution to the problem of searching a list of memories—there is no search and selection process operating—but these kinds of transitions are probably part of the way that cognition introduces relevant information into the stream of thought.

The second solution is to rely on similarity or proximity in a representational state space (Churchland 1998; Shea 2007; Kriegeskorte and Kievit 2013). As we saw in section 4.4, contents in special-purpose informational models are sometimes represented in a state space, on the basis of which people make judgements of similarity (Charest et al. 2014), or judgements about other relations (Nelli et al. 2023). This can be deployed as a way of looking up relevant information: of retrieving memories that are similar to the current context.

These similarity spaces are found, not only within domain-specific systems like visual face processing, but also more widely. Huth et al. (2016) recorded fMRI data while participants listened to hours of radio stories. They modelled the meaning of words in the audio stream using word embeddings (where the vector for a word characterises which other words it tends to co-occur with) and used a regression model to predict voxel-by-voxel brain activity from the word vectors. They found that, using their regression model, they could predict activity in many cortical areas based on which word was being presented in the auditory stream. The weights in the regression model revealed activity organised along a number of semantic dimensions, for example a dimension with perceptual and physical categories at one end and human-related categories (social, emotional) at the other. The axes of variation in the neural signal separate words into categories like: tactile (‘fingers’), visual (‘yellow’), numeric (‘four’), locational (‘stadium’), abstract (‘natural’), temporal (‘minute’), professional (‘meetings’), violent (‘lethal’), communal (‘schools’), mental (‘asleep’), emotional (‘despised’), and social (‘child’). Activity in diverse neural areas reflects variation along these semantic dimensions, particularly in superior temporal cortex (long associated with semantic processing), parietal cortex, and prefrontal cortex.

In short, there is now considerable neural as well as behavioural evidence for the kinds of representational spaces postulated by Churchland (1998, 2012). Both seeing images and understanding sentences generate representations that are organised into similarity spaces. These spaces need not be domain-specific. They encompass the kinds of abstract semantic dimensions found by Huth et al. (2016). What is interesting for our purposes is that making transitions within a semantic space offers a computationally tractable way to perform relevance-based search.

For example, when I am considering how to behave in relation to one person X, I can move to representations of similar individuals, Y and Z, and recall how I acted in relation to them in the same situation. Moving to nearby portions of semantic space is a way of prompting relevant information. But this is not like looking up and searching through all the information in a mental file, checking each piece of information for relevance. The shape of the semantic space effectively builds in certain assumptions of relevance. To this extent, it is a way of reusing the if-then way of avoiding the frame problem as a (partial) solution to the frame problem.

Furthermore, semantic spaces offer a ready way to deal with context-sensitivity. Representations organised in a semantic space are related along several different semantic dimensions at once. For example, face stimuli are automatically organised along dimensions of trustworthiness and dominance (Oosterhof and Todorov 2008). Relevance can be assessed along just one of these dimensions, or any combination of them. For example, in a dynamic state space in prefrontal cortex that registers both colour and direction of motion, activity can be projected along the dimension—colour or motion—that is relevant to the current task (Mante et al. 2013). The dynamics of a network can be changed by ‘clamping’ one or more dimensions and considering only relations in the remaining subspace. In the last chapter (§5.8) we saw how the content of one concept can act as a contextual cue which constrains the processing taking place in a special-purpose system activated by another concept with which it is combined. That offers a model of how different dimensions of similarity (trustworthiness, dominance, etc.) are selected in different situations.

Grand et al. (2022) compared the way human participants and a DNN arrange objects along different dimensions (Fig. 6.1). For example, tiger and dolphin are judged as similar in respect of size but very different in respect of dangerousness. This is predicted by activation patterns in the trained DNN. Representations are close together in state space when projected along the size dimension but far apart along the dangerousness dimension. Applying this insight to a space representing people, when I represent an individual X in that space it should be straightforward to retrieve individuals who are similar with respect to dominance. A contextual cue can thus act as a ‘clamp’ so that retrieval in a semantic space takes place along a contextually relevant dimension. The same space can be sampled along more than one dimension. This means that, when retrieving memories to use in inference on a given occasion, a single informational model can be sampled for relevance in more than one way.
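
Here is a sketch of the projection idea, with invented toy vectors rather than real word embeddings: a 'size' direction and a 'dangerousness' direction are each defined by a pair of anchor words, and the same items come out close on one dimension and far apart on the other. This mirrors the method of Grand et al. (2022) only loosely.

```python
import numpy as np

# Toy 3-d 'embeddings' (invented for illustration; real embeddings are
# learned and have hundreds of dimensions).
vec = {
    "mouse":     np.array([0.1, 0.1, 0.0]),
    "dolphin":   np.array([0.8, 0.1, 0.3]),
    "tiger":     np.array([0.8, 0.9, 0.1]),
    "small":     np.array([0.0, 0.0, 0.0]),
    "large":     np.array([1.0, 0.0, 0.0]),
    "safe":      np.array([0.0, 0.0, 0.0]),
    "dangerous": np.array([0.0, 1.0, 0.0]),
}

def project(word, low, high):
    """Position of `word` along the semantic direction running from `low` to `high`."""
    direction = vec[high] - vec[low]
    return float((vec[word] - vec[low]) @ direction / (direction @ direction))

for w in ["mouse", "dolphin", "tiger"]:
    size = project(w, "small", "large")
    danger = project(w, "safe", "dangerous")
    print(f"{w:8s}  size={size:.2f}  danger={danger:.2f}")

# dolphin and tiger come out close on the size dimension (both 0.80)
# but far apart on the dangerousness dimension (0.10 vs 0.90).
```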


Fig. 6.1

(Top) Items arranged in a high-dimensional semantic space (illustrated here in three dimensions) project onto a semantically-significant dimension of variation (small to large). (Bottom) The same items can be projected onto different semantically-significant dimensions of the underlying space. Dolphin and Tiger are close along the size dimension but distant along the dangerousness dimension. From Grand et al. (2022). See the open access online edition of the book for the full colour figure.


Doesn’t this just push back the problem? Retrieval can rely on assumptions of relevance implicit in semantic state spaces, but how does the system learn the state spaces over which this occurs? The answer is that these spaces are learnt, laboriously, from experience, as we have seen. We have good empirical evidence that this is the case and plausible computational models of how it occurs. The frame problem is not the problem of how semantic spaces or categorical dispositions could be acquired in the first place (important as that question is). State spaces may also figure in an account of how some concepts are acquired, namely through alignment between partially incomplete state spaces (Aho, Roads, and Love 2023; see also Søgaard 2023).

Relevance-based search was challenging for RTM because it seems that the search for relevant information is non-local—it somehow takes into account a whole collection of information. As we saw in section 1.3, we do in fact have computational models, consistent with RTM, where transitions effectively take account of a collection of information in parallel. Such computations are not mysterious if we don’t limit ourselves to step-by-step classical computations (as considered by Fodor in declaring the mystery). When proximity in the state space of a trained neural network is used to retrieve relevant information, that is a non-local computation (of the kind highlighted in §1.3).

Assessing similarity is non-local in the sense that it involves weighing many different characteristics at once and calculating their resultant. How is that computationally tractable? Within a semantic space it works because the geometry of the space reflects all these different features at once. The geometry of the space is trained into the network by experience. Once trained, closeness in similarity space reflects an overall assessment that integrates lots of features at once. Many different samples have been encountered and had an effect on the local gradients at each point in the space, each experience having more impact in some areas than in others. The moves then made in the trained similarity space are computationally undemanding, but reflect that wealth of experience. This is captured in the model by a step that takes account of a whole matrix of values at once. It occurs in real neural systems by a process that takes place across a whole array of neurons in parallel.

Here is an analogy. Consider a comet moving through the solar system. When it is at a certain point, we might ask how the comet calculates where to go next. Its next step will depend on its interaction with a huge array of objects, some close by, others distant. Large numbers of nearby asteroids will each have an impact. A really close asteroid could have a big effect. Much further away, the sun will have a large effect; also to some extent each of the planets. To parody the frame problem, it looks like, in order to work out where to go next, the comet has to calculate the effect of each of these other celestial bodies on its future trajectory. How does it make so many calculations in real time? Why isn’t it paralysed in one spot, working out where to go next?

The answer, of course, is that the comet does not need to interact separately and serially with all the other bodies. They all have an effect on the local gravitational field and the comet reacts to that. The local gravitational field is the resultant of the integration in parallel of a huge number of different forces. By reacting to the resultant force the comet’s behaviour reflects the parallel effects of a huge array of interactions all at once. Moves in semantic state space are similar: the relevant representations are accessed by moving through a space whose shape reflects parallel constraint satisfaction across a whole collection of information.
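
The analogy can be put in computational terms: summing the gravitational pull of every body is a single vectorised operation producing one resultant, not a serial consultation of each body in turn. The masses and positions below are arbitrary.

```python
import numpy as np

G = 6.674e-11
rng = np.random.default_rng(1)

# Arbitrary toy values: positions (m) and masses (kg) of many bodies.
positions = rng.uniform(-1e12, 1e12, size=(10_000, 3))
masses = rng.uniform(1e20, 1e27, size=10_000)

comet_pos = np.zeros(3)

# One parallel step: every body contributes to a single resultant acceleration.
offsets = positions - comet_pos                     # (10_000, 3)
dists = np.linalg.norm(offsets, axis=1)             # (10_000,)
accel = (G * masses / dists**3)[:, None] * offsets  # contribution of each body
resultant = accel.sum(axis=0)                       # the 'local field' the comet reacts to

print(resultant)
```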

Another example is a computation that proceeds by exploring a whole state space in parallel. We see examples in some computational models of route calculation in the hippocampus (mentioned briefly in §1.3). The calculation is based on a process that takes place in parallel across the whole array of place cells. This effectively sweeps through many different routes that trace back from a given goal to the location of the agent (Samsonovich and Ascoli 2005; Khajeh-Alijani et al. 2015). The relevant computational property is unlikely to be simply a matter of activation, but instead a dynamical property like the phase offset between the activation at different locations during synchronous activity, activity like the sharp wave ripples observed electrophysiologically in the hippocampus. Recent work suggests that sharp wave ripples may be the basis for episodic memory recall in the hippocampus (Norman et al. 2019), in which case this would be a clear-cut example of a global computation that performs relevance-based search and retrieval.
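
A minimal sketch of the kind of parallel sweep these models describe (not the Samsonovich and Ascoli model itself): a wave of activity spreads outward from the goal across a whole grid of 'place cells' at once, the arrival time at each cell implicitly encoding the length of every route back to the goal; following the local gradient from the agent's position then reads off a route.

```python
import numpy as np

grid = np.zeros((20, 20), dtype=bool)   # False = open space, True = wall
grid[5:15, 10] = True                   # a wall to route around
goal = (18, 18)

arrival = np.full(grid.shape, np.inf)
arrival[goal] = 0.0

def neighbour_min(a):
    """Minimum over each cell's four neighbours, computed for all cells at once."""
    p = np.pad(a, 1, constant_values=np.inf)
    return np.minimum.reduce([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])

# Parallel sweep: every cell updates at once from its neighbours, so the wave
# spreading out from the goal explores all routes simultaneously.
for _ in range(grid.size):
    updated = np.where(grid, np.inf, np.minimum(arrival, neighbour_min(arrival) + 1.0))
    if np.array_equal(updated, arrival):
        break
    arrival = updated

# Reading off a route is then just descending the gradient from the agent's position.
pos, route = (1, 1), [(1, 1)]
while pos != goal:
    r, c = pos
    neighbours = [(r + dr, c + dc) for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                  if 0 <= r + dr < 20 and 0 <= c + dc < 20]
    pos = min(neighbours, key=lambda rc: arrival[rc])
    route.append(pos)
print(len(route) - 1, "steps to the goal")   # 34
```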

6.5 How Cognition Partly Avoids and Partially Solves the Frame Problem

I have sketched a way that a hybrid computational system can address the frame problem, partly by avoiding it with dedicated if-then computations, and partly by approximating a solution in areas where the limits of the if-then solution are reached. This picture offers us an account of how human cognition solves the frame problem, to the extent that it does. Content-general reasoning with conceptual representations allows us to consider novel scenarios and sensibly process representations that transcend the experience on which our special-purpose informational models have been trained. However, since concepts act as an interface to special-purpose informational models, those systems can be re-purposed offline, in simulation mode, to generate relevant considerations on which to perform inferences in thought. Doing this across multiple different built-in assumptions of relevance can approximate an isotropic search for relevance.

How does my suggestion differ from other proposed solutions to the frame problem? A first observation is that DNN models have shown that the if-then way of avoiding the frame problem is much more powerful than previously thought (for example when Fodor was writing about the frame problem in the 1980s and 1990s). It turns out that you can get a long way with systems that build in implicit assumptions of relevance—provided the systems have enough exposure to experience in their evolutionary history, and especially in their learning history, to be able to realize a suite of complex input-output dispositions.

A second element of my proposal is the feature of conceptual thought that formed the centrepiece of the previous chapter. We reach conclusions in conceptual thought not just by reasoning from explicit memories, but also by running simulations in special-purpose systems. This is a way that conceptual thought can take advantage of the domain-specific assumptions of relevance that are built into special-purpose informational models. Concepts can act as mediators between a range of different special-purpose systems. Suppose I’m thinking about my extended family sitting around the living room on a social occasion. A great aunt arrives. I can simulate what could happen next by relying on the implicit assumptions of my system of naïve physics; my system for tracking moving agents; and my system for tracking social hierarchy. I can see that the aunt is most likely to move towards the gap on the sofa, but I predict that this move will be disastrous because she will then be rude to the relative that she would sit next to. Trained if-then modules each have their own assumptions of relevance, but conceptual thought can in effect rely on lots of different assumptions of relevance, of diverse kinds, mediating between them to generate potentially relevant considerations and evaluate them.

The combinatorial power of conceptual thought is important here. Its relevance to the frame problem only becomes clear when we focus on the way concepts allow us to rely on the assumptions of relevance contained in special-purpose informational models. Concept compositionality is then seen as a way of relying on and juxtaposing different kinds of assumptions of relevance from different domains. That is quite different from looking up all the information connected to a concept (all the entries in a mental file) and assessing each for relevance. Each special-purpose system just throws up the one or two considerations it takes to be most relevant. (It is obvious, as soon as I simulate the social hierarchy, that the great aunt’s sitting there would be disastrous.)

This way of retrieving relevant information on which to perform deliberate inference circumvents the need to do what the classic formulation of the frame problem asks us to do, namely to search through a large list of memories and select those that are potentially relevant on which to perform inference. Running a simulation in a special-purpose system need not involve searching a list of memories. The system has a disposition to token various representations in various circumstances (both online, in response to input, and offline, in simulations). Those representations need not be stored explicitly anywhere. We have effectively re-cast the problem. The benefit of the capacity for reasoning with explicit representations is the ability to deal with the past by reasoning with explicit memories, rather than having to treat the past as a further contextual cue, part of one long chain of inputs. One way to do that is to store an explicit representation of each past situation (as in the Pritzel et al. (2017) model, say). Doing it that way throws up the problem of searching the list for relevance in a tractable way. But the benefits that accrue from reasoning with explicit representations of the past don’t require the memories to be stored that way. Information can be stored in the form of trained dispositions to token an explicit representation given certain inputs. Models of episodic memory based on pattern completion work like that (Teyler and Rudy 2007). I have been arguing that deliberate thinking can rely on memories generated by special-purpose systems in that way. This does not displace the question of how it is that relevant memories are generated. But it does show that search through a list of explicit memories is not the only option.

We saw that representational state spaces provide a ready assumption of relevance in their similarity structure. Projecting along different dimensions (e.g. size and dangerousness) allows the same representational space to be sampled for relevance in a number of different ways. My suggestion is that, when we are engaging in deliberate conscious thinking to work out what is the case or what to do, we can effectively retrieve a range of different relevant considerations by running simulations in different special-purpose informational models and, within a model, by sampling for similarity along a number of different dimensions. Taken individually, none is a comprehensive search of all relevant information; taken together, they can go some way to approximating an isotropic search, one in which relevant information can come from many different directions.

A further refinement is that deliberate thinking has access to representations about how to think, including tips for searching for relevance. Often we learn these socially. One set of strategies involves randomising in some way, to put oneself in a new context: move to a new physical location, look up a random word in a dictionary, think of answers beginning with ‘T’, ask ‘who, what, where, when, why, how?’, etc. The new context provides a new way of probing special-purpose informational models for relevant information. Other socially-acquired strategies are more specific. For example, if you’re planning a mountaineering trip to a remote location, don’t forget to think about what type of cooking fuel you’ll be able to get. In between randomising and very specific pieces of relevance-searching advice there is a whole suite of tools for recall: tools that we learn socially. That we create and share these tools in itself suggests that the problem of relevance-based recall is a real practical problem faced by human cognition.4

Taken together, these tools and techniques give deliberate thought ways of finding information that is potentially relevant in diverse and heterogeneous respects. Once we generate a limited set of relevant information, conceptual thought adds the capacity to reason step-by-step with this information. That is how I get from contemplating breakfast to thoughts about the rainforest. Thoughts of foods and flavours generate some options. But I can locate those concepts in a semantic space that has quite abstract dimensions. I may, for instance, organise consumer goods by their environmental impact. (Not very accurately, to be sure, but perhaps with some crude evaluative feel.) That throws up a dimension of contrast between margarine on toast and sliced apple, say, and a dimension of relevance that brings to mind the palm oil plantations of Borneo. Some recent AI models have this hybrid character. Although large language models may not use broadly-logical reasoning in their internal processes (Traylor, Feiman, and Pavlick 2021), they can approximate or display the capacity for broadly-logical reasoning in their outputs, especially when appropriately prompted (Bubeck et al. 2023). This means the same underlying LLM can be alternately prompted in a hybrid way, first in a way that encourages it to rely on its learned content biases (i.e. assumptions of relevance), and then re-prompted to encourage it to perform logical inference on these representations (Creswell, Shanahan, and Higgins 2023). Moving back and forth between these two kinds of prompting improves the system’s performance. This exemplifies the kind of divide-and-conquer strategy I have been advocating.
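
The alternation can be sketched as a simple control loop. The `llm` function is a stand-in for whatever model interface is in use and the prompts are illustrative; this is the shape of the Creswell et al. strategy rather than their implementation.

```python
def llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (hypothetical interface)."""
    return "<model output>"

def answer(question, facts, steps=3):
    context = "\n".join(facts)
    for _ in range(steps):
        # Selection step: lean on learned content biases (assumptions of
        # relevance) to surface the facts that matter for this question.
        selected = llm(
            f"Facts:\n{context}\n\nQuestion: {question}\n"
            "Select the one or two facts most relevant to answering the question."
        )
        # Inference step: re-prompt for an explicit logical step over the selection.
        inferred = llm(
            f"From these facts alone:\n{selected}\n"
            "State one new conclusion that follows logically."
        )
        context += "\n" + inferred   # the conclusion becomes available to later steps
    return llm(f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:")
```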

When we are constructing a suppositional scenario in the cognitive playground, that in itself may act as a prompt for relevance. As I build up a picture of my ideal breakfast-in-bed, I fill in bits that are obviously missing, like a teaspoon to go with the teacup, and also configural properties of the scenario, like the fact that I have imagined too many different items so that they won’t fit together on the tray. Representations filled in as a result of constructing the suppositional scenario can act as further contextual cues for retrieving relevant information from memory.

These ingredients do not amount to an exhaustive way of searching for relevance. Relevant information can still be overlooked. Reasoning is a powerful domain-general way of reaching new conclusions, but it can lead the agent in quite the wrong direction if relevant and important information is not fed into the decision-making process. So the approach I have sketched is an imperfect solution. However, human decision-making is imperfect. We are sometimes myopic and overlook considerations whose relevance would be obvious, if only we had considered them. We are famously biased in the factors we take into consideration; and the information that comes to mind can be powerfully primed by context (Tversky and Marsh 2000; Azzopardi 2021). (That is the downside of relying on built-in dispositions about how context implies relevance.) We can be very effective in situations we have encountered many times before, but if we want to have a good chance of recalling information relevant to a novel situation, we often have to rely on explicit strategies and mnemonics.

Most of these elements have been discussed before in relation to the frame problem, in some guise. The role of concepts as mediators has not been emphasised in previous approaches. Carruthers suggests a similar role for sub-vocalised language (Carruthers 2003). But he thinks of language as a way of accessing a collection of content-addressed beliefs, not as a way of driving simulations in special-purpose systems. Nor does the complementarity between the frame problem and the lesser-known ‘if-then’ problem feature much in the previous literature. It is in the context of the recently-discovered power of DNN-based if-then approaches, and the fact that they nevertheless still face limits that call for the storage of explicit memories, that the trade-off between these two different styles of computation becomes clear (Botvinick et al. 2019; Shea 2023b). I also make explicit the way that built-in assumptions of relevance in special-purpose systems can be relied on in memory retrieval, and add a model of how they can be polled for relevance in more than one way.

In short, the model of concept-driven thinking developed over the preceding chapters offers a way of avoiding and solving the frame problem. It is a realistic computational proposal for how representational processing can be configured to sail a middle course between the if-then problem and the frame problem, taking advantage of the complementary costs and benefits of each. Most importantly for our purposes, it is an empirically plausible hypothesis as to how human cognition manages to avoid and solve the frame problem, to the extent that we do.

Chapter Summary

6.1 The Frame Problem

This chapter is about how human cognition manages to solve the frame problem and the lesser-known ‘if-then’ problem. The frame problem is the problem of how cognition manages to select relevant information on which to perform inferences. Relevant considerations can come from anywhere (isotropy), but checking every piece of stored information for relevance is computationally intractable. (p. 156) This was a practical problem for good old-fashioned artificial intelligence, but it seems that humans solve it with some ease. Closely related is the problem of abduction, which furthermore seems to involve the non-local weighing of a range of different considerations at the same time. The frame problem is also a problem for theorists trying to understand the mind, Fodor’s great mystery of central cognition. (p. 157)

6.2 Avoiding the Frame Problem Leads to the If-Then Problem

With the rise of DNNs, the frame problem has receded as a practical issue for AI researchers, but DNNs do not so much solve the frame problem as avoid it. DNNs do not store explicit memories, but effectively build in assumptions of relevance in their learned weights. (p. 158) It was long thought that this approach was inadequate to deal with real-world problems, a failing that Fodor attributed to the failure to solve the frame problem. But DNNs now handle such problems well. Surprisingly, given enough training, DNNs can learn processing dispositions that build in appropriate assumptions of relevance, and which generalise effectively.

DNNs share a limitation with earlier modular approaches: their built-in assumptions only work within a specific domain. That can be circumvented to some extent by having a range of different domain-specific modules, but this calls for some way of deploying them selectively and integrating their outputs. (p. 159) And even a special-purpose system needs to be sufficiently flexible to produce different outputs in different contexts. DNNs treat context as just another aspect of the (very long) input. That is to deal with the past as a contextual input that changes the state of the processor. (p. 160) This raises a problem pressed by Gallistel: the system has to allocate dedicated processing resources to each potential input it might encounter. Botvinick et al. (2019) suggest a solution, which is to store explicit memories of the circumstances encountered and the outcomes received.

(p. 161) In short, although DNNs have shown that if-then dispositions are much more effective in avoiding the frame problem than previously thought, this solution gives out eventually; a suggested solution to this ‘if-then’ problem is to store explicit memories.

6.3 A Compound Architecture Still Faces the Frame Problem

Learned if-then dispositions and inference from explicit memories have complementary costs and benefits. Human cognition deploys both approaches; some AI models do the same. (p. 162) Stored memories can enter into content-general and content-specific inferences. If retrieved memories just act as further inputs to trained if-then dispositions, then the system has to have been trained on and dedicate resources to responding to each such input; by contrast, a capacity for content-general reasoning can be applied to a representation of a situation wholly outside the system’s training experience. Content-general reasoning can also be applied to novel thoughts; general-purpose compositionality can generate such thoughts.

(p. 163) A compound architecture, while helpfully taking advantage of the complementary profiles of the two approaches, still faces the frame problem—the problem of selecting which memories to compute with. Compound AI models do face this problem (see the examples in §6.3), and deal with it by performing operations on the entire store of memories. The frame problem has re-emerged, and exhaustive search, while feasible in the models, does not amount to a solution.

6.4 A (Partial) Solution

I argue here that the solution deployed in human cognition is a hybrid, with plug-and-play concepts taking advantage both of if-then dispositions and general-purpose reasoning, and reusing the if-then approach as a way of retrieving relevant memories.

(p. 164) Content-addressable memory may be part of the solution but, without further constraints, looks to be either myopic or too wide-ranging. I suggest retrieval can rely on content-specific dispositions. Direct-CS transitions effectively assume that certain contents are relevant, which they introduce into thought directly. (p. 165) A second way of introducing relevant information is to retrieve representations that are nearby in a representational state space. These similarity spaces are found widely, with representations organized along a number of semantically-relevant dimensions. Making transitions within a semantic state space offers a computationally tractable way to perform relevance-based search. The state space effectively builds in certain assumptions of relevance.
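
As a toy illustration of retrieval by proximity (my own sketch, with made-up vectors): items are stored as points in a semantic space whose dimensions already encode what tends to matter, and a cue retrieves whatever lies nearby. In a neural implementation the proximity computation would be carried out by the network's own dynamics rather than by the explicit scan used here for readability.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_nearby(cue, memory, k=2):
    """Return the k stored items whose vectors lie closest to the cue
    in the representational state space."""
    ranked = sorted(memory.items(), key=lambda kv: cosine(cue, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Illustrative 3-d vectors; the dimensions might loosely track size,
# dangerousness, and domesticity (purely invented values).
memory = {
    "mouse": np.array([0.1, 0.2, 0.8]),
    "cat":   np.array([0.2, 0.3, 0.9]),
    "lion":  np.array([0.7, 0.9, 0.1]),
    "tiger": np.array([0.8, 0.9, 0.1]),
}

print(retrieve_nearby(memory["lion"], memory, k=2))   # ['lion', 'tiger']
```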

(p. 166) Semantic spaces offer a ready way to achieve context sensitivity. Relevance can be assessed along any one of several different dimensions, for example: colour or motion of a stimulus, dominance or trustworthiness of individual people. A contextual cue can act as a ‘clamp’ so that retrieval takes place along a relevant dimension, as with the contextually-relative judgements of similarity in the experiment by Grand et al. (2022). (p. 167) Acquisition of these state spaces is a different problem—learning laboriously from experience—of which we have plausible accounts.

Figure 6.1: Representations in a high-dimensional state space are arrayed differently when projected onto different semantically-significant dimensions of the underlying space (e.g. size vs. dangerousness).
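
The ‘clamp’ idea can be illustrated with the kind of semantic projection Grand et al. (2022) used, in which a dimension is defined by a pair of anchor words and items are scored by projecting their vectors onto it. The embeddings below are invented low-dimensional stand-ins, and project is my own helper, so the sketch only shows the shape of the computation.

```python
import numpy as np

# Invented low-dimensional 'embeddings'; in Grand et al. (2022) these would be
# high-dimensional word vectors learned from text.
emb = {
    "small":     np.array([0.0, 1.0,  0.0]),
    "large":     np.array([1.0, 0.0,  0.0]),
    "safe":      np.array([0.0, 0.0,  1.0]),
    "dangerous": np.array([0.5, 0.0, -1.0]),
    "mouse":     np.array([0.1, 0.9,  0.6]),
    "elephant":  np.array([0.9, 0.1,  0.7]),
    "tiger":     np.array([0.7, 0.3, -0.6]),
}

def project(word, pole_a, pole_b):
    """Score a word along the semantic dimension running from pole_a to pole_b;
    the contextual cue 'clamps' retrieval to this dimension."""
    axis = emb[pole_b] - emb[pole_a]
    return float(emb[word] @ axis / np.linalg.norm(axis))

animals = ["mouse", "elephant", "tiger"]
# A 'how big?' context clamps judgement to the size dimension ...
print(sorted(animals, key=lambda w: project(w, "small", "large")))
# ... while a 'how dangerous?' context clamps it to the danger dimension.
print(sorted(animals, key=lambda w: project(w, "safe", "dangerous")))
```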

When proximity in the state space of a trained neural network is used to retrieve relevant information, that is a non-local computation, of a kind that would appear mysterious if we were limited to step-by-step classical computations. (p. 168) Similarity in state space results from taking account of a large number of parameters in parallel. An analogy is the way a comet moves in the solar system: at any point its motion is the resultant of the forces exerted on it simultaneously by a very large number of other celestial bodies. Another way non-local inferences could occur is illustrated by a computational model of route calculation in the hippocampus, in which a signal propagates in parallel across the whole array of place cells at once.
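
For the flavour of a computation that propagates in parallel across a whole array of units, here is a toy wavefront scheme on a grid (my own illustration, not the hippocampal model the chapter refers to): a distance-to-goal signal spreads across every cell at once, and a route can then be read off by following the gradient.

```python
import numpy as np

def propagate(grid, goal):
    """Spread a distance-to-goal signal in parallel across the whole grid
    (0 = free cell, 1 = wall); every cell is updated on every sweep."""
    dist = np.full(grid.shape, np.inf)
    dist[goal] = 0.0
    for _ in range(grid.size):                    # enough sweeps to cover the grid
        padded = np.pad(dist, 1, constant_values=np.inf)
        neighbours = np.minimum.reduce([
            padded[:-2, 1:-1], padded[2:, 1:-1],  # up, down
            padded[1:-1, :-2], padded[1:-1, 2:],  # left, right
        ])
        updated = np.minimum(dist, neighbours + 1)
        updated[grid == 1] = np.inf               # walls never carry the signal
        updated[goal] = 0.0
        if np.array_equal(updated, dist):
            break
        dist = updated
    return dist

def route(grid, start, goal):
    """Read off a route by stepping to the neighbour with the smallest distance."""
    dist = propagate(grid, goal)
    path, pos = [start], start
    while pos != goal:
        r, c = pos
        options = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        options = [p for p in options
                   if 0 <= p[0] < grid.shape[0] and 0 <= p[1] < grid.shape[1]]
        pos = min(options, key=lambda p: dist[p])
        path.append(pos)
    return path

grid = np.array([[0, 0, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0],
                 [0, 1, 1, 0]])
print(route(grid, start=(3, 0), goal=(0, 0)))
```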

6.5 How Cognition Partly Avoids and Partially Solves the Frame Problem

(p. 169) I have suggested that human concept-driven thinking relies on special-purpose informational models to generate relevant considerations, using multiple contextual cues to retrieve information according to multiple built-in assumptions of relevance in order to approximate an isotropic search.

This differs from previous theories, first, in placing greater reliance on built-in assumptions of relevance, motivated by new DNN models demonstrating that the if-then way of avoiding the frame problem is more powerful than previously thought. Second, the account of concepts in Chapter 5 shows how conceptual thought can rely on different assumptions of relevance in diverse if-then systems, and integrate their results. (p. 170) The combinatorial power of concepts is important here, each connecting into different assumptions of relevance in special-purpose informational models, in a way not previously emphasised. The picture of retrieval shows that, when memories are not stored as a list of explicit representations, there is no need to search a list to retrieve relevant information (retrieval works by pattern completion or some other dispositional process). (p. 171) Semantic state spaces show how it is possible to perform such retrieval in more than one way, by sampling along multiple different semantically-relevant dimensions.
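
The pattern-completion point can be illustrated with a classic Hopfield-style sketch (a generic toy, not the book's specific proposal): memories are stored in the weights, and a degraded cue is pulled towards the nearest stored pattern by the network's dynamics, with no list of memories being searched.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(2, 40))       # two stored 'memories'
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)                             # Hebbian weight matrix

def complete(cue, steps=10):
    """Retrieve by pattern completion: the network dynamics push the cue
    towards the nearest stored attractor; nothing is searched item by item."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Degrade one stored pattern and let the dynamics fill it back in.
noisy = patterns[0].copy()
noisy[rng.choice(40, size=5, replace=False)] *= -1
print(np.array_equal(complete(noisy), patterns[0]))   # typically True
```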

A further refinement is to use deliberate strategies for searching for relevance, usually acquired socially. Human thinking can move back and forth between contextually-cued recall and step-by-step inference, so as to hit on relevant considerations and reason with them; some hybrid LLMs do the same. (p. 172) Filling in a coherent scenario in the cognitive playground may itself suggest relevant considerations. My picture presents a solution which is partial and imperfect; but so is human cognition. Most of these elements have been discussed before in some guise, but my picture emphasises the role of concepts as mediators, points to the power of the if-then way of avoiding the problem (while agreeing that this is not on its own a solution), and shows how this tactic can be re-purposed as a way of recalling information to use in reasoning, polling memory in multiple ways so as to approximate an isotropic search, albeit imperfectly.
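
As a final sketch (an entirely invented example, with toy string matching standing in for both recall and inference), here is the shape of that back-and-forth: a cue retrieves memories, an explicit inference step picks out what to think about next, and that becomes the cue for the next round of recall.

```python
# Toy memory store: cues (keys) and the considerations they bring to mind.
memories = {
    "hike": ["check the weather forecast"],
    "weather": ["rain is forecast for the afternoon"],
    "rain": ["pack waterproof gear and start early"],
}

def recall(cue):
    """Contextually-cued recall: memories whose key occurs in the cue."""
    return [m for key, items in memories.items() if key in cue for m in items]

def infer(consideration):
    """One explicit inference step: extract topics worth exploring next."""
    return [topic for topic in memories if topic in consideration]

def deliberate(question, steps=3):
    """Alternate cued recall with an inference step that resets the cue."""
    context, considerations, visited = question, [], set()
    for _ in range(steps):
        retrieved = [m for m in recall(context) if m not in considerations]
        considerations.extend(retrieved)                  # cued recall
        topics = [t for m in retrieved for t in infer(m) if t not in visited]
        if not topics:
            break
        context = topics[0]                               # inference sets the next cue
        visited.add(context)
    return considerations

print(deliberate("whether to go for a hike tomorrow"))
```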

(p. 173) In this way, the account of concept-driven thinking developed in the first half of the book has an important explanatory payoff: it shows how human cognition can dance around the frame problem, partly avoiding it and partially solving it.

Footnotes

1. The name comes from an earlier (related) computational problem about updating a scene without having to specify a large number of ‘frame axioms’ about facts that will not change as the result of a given event (Sprevak 2005) (a problem that has largely been solved: Shanahan 2016).

2. ‘The joke is that the speaker’s mother is trying to get them to go to their step dad’s poetry reading, but the speaker doesn’t want to go, so they are changing their flight to the day after the poetry reading’ (Chowdhery et al. 2022).

3. Interestingly, Einstein describes this in terms of the ‘free choice of such concepts’, not obstructed by being ‘immediately connected with the empirical material’ (Einstein 1970, p. 49).

4. Although outside the scope of the book, it is worth noting that social processes are also important for generating knowledge in their own right. For example, scientific discovery is a deeply collective process, based on the culture, norms, and institutions of science. That is another way of achieving relevance search. Even if no individual solves the frame problem, if they sample in different ways and transmit information culturally, the social process may approximate a collective isotropic search for relevance.

5. Each sentence of the summary corresponds to one paragraph. Page numbers indicate where the paragraphs begin.
