Cross-posted from here.
Students tend to just learn the examples and skip the concepts.
Is this the nature of human learning, or is it a learning technique that is encouraged by the way we test them (e.g., solve an example involving a piecewise-defined function, then test them by giving another piecewise-defined function)?
I’m not familiar with the literature, but I do know that “math people” tend to severely overestimate the degree to which the rest of the population reasons from first principles (“deductive” reasoning). Most people exclusively reason by analogy (“inductive” reasoning):
- Is $f(x)$ continuous? Well, based on the examples I've seen, continuous functions look "normal" and discontinuous functions look "weird." In this case $f(x)$ looks more normal than weird, so $f(x)$ is probably continuous.
Side Note: Students who exclusively reason by analogy generally don’t see it as problematic that they don’t know how to proceed in the absence of similar examples. When that happens, they view it as a “data” problem, not a “model” problem, so they “fix” it by obtaining a similar example from someone or something that seems trustworthy and adding it to their dataset of examples for future decisions.
This isn’t just in math. It’s across life in general. Consider stereotypes:
- Is X true about thing Z? Well, based on the examples I've seen, thing Z seems similar to the type of things for which X is true. So X is probably true about thing Z.
To me, this would suggest that overreliance on analogical reasoning has natural origins (even if the specific decision rule for a particular case of analogical reasoning might be learned from other people).
Of course, this is not to claim that analogical reasoning should be avoided entirely. Analogical reasoning generally does a good job of supplying you with a quick decision that is probably correct (or at least more likely correct than incorrect). In most situations in life that’s all you need.
Paraphrasing from here, even professional mathematicians rely on heuristics and patterns that they are confident they can recursively reduce to first principles. For instance, a typical professional mathematician doesn’t determine whether a function is continuous by running an epsilon-delta argument. Given $\sin(x)+\cos(x),$ they know it is continuous without thinking; if asked to verify, they note that it is the sum of two continuous functions (again, no epsilon-delta). While they could probably produce an epsilon-delta proof if pressed, very few mathematicians would do so just to answer whether $\sin(x)+\cos(x)$ is continuous.
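To make the two levels concrete (these are the standard textbook definitions, not something specific to the linked post), the first-principles route is the epsilon-delta definition:

$$f \text{ is continuous at } a \iff \forall \varepsilon > 0\ \exists \delta > 0:\ |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon,$$

while the route a working mathematician actually takes is the sum rule, itself provable from the definition above:

$$f, g \text{ continuous at } a \implies f + g \text{ continuous at } a, \quad\text{so } \sin(x)+\cos(x) \text{ is continuous everywhere.}$$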
There’s a difference between using analogical reasoning and over-relying on analogical reasoning. When you use analogical reasoning, you need to have a sense of its confidence (i.e. probability of correctness) and how that measures up against the stakes of the situation you’re in. If the confidence does not measure up against the stakes, then you need to fall back to first-principles reasoning (provided that you have enough time to do so). People who over-rely on analogical reasoning do not do this.
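The fallback rule above can be written as a tiny decision procedure. This is purely a toy sketch of the idea; all names and the numeric thresholds are illustrative, not from any source.

```python
def decide(analogy_answer, confidence, stakes, first_principles):
    """Toy model of the fallback rule: accept the quick analogical
    answer only when its confidence (estimated probability of being
    correct) clears the stakes; otherwise fall back to the slower
    first-principles reasoning (a zero-argument function)."""
    if confidence >= stakes:
        return analogy_answer       # quick analogy is good enough here
    return first_principles()       # stakes too high: do the careful work

# Low-stakes question: the 0.9-confident analogy is accepted as-is.
print(decide("continuous", 0.9, stakes=0.8,
             first_principles=lambda: "continuous (verified)"))
# High-stakes question: the same analogy is rejected; fall back.
print(decide("continuous", 0.9, stakes=0.99,
             first_principles=lambda: "continuous (verified)"))
```

Over-reliance on analogy, in this picture, amounts to treating `stakes` as always low, so the fallback branch never runs.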
Lastly, some hardline pure mathematicians might argue that the entire point of math classes is training in deductive reasoning, so inductive reasoning has no place. That may be partially true for some pure math classes, especially niche ones focused on the axiomatic foundations of math. But this answer interprets math in its most general sense, and in most areas under the gigantic umbrella of math, inductive reasoning has some place. Throughout applied math, for instance, there’s the saying (due to George Box) that “all models are wrong, but some are useful.” And in machine learning specifically (an extreme case for illustrative purposes), the whole point is to use math to build automated systems that use inductive reasoning to make decisions that are correct enough to be useful.
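Nearest-neighbor classification is perhaps the purest machine-learning instance of this: it answers a question by literally consulting the most similar stored examples and taking a majority vote. A minimal sketch (the data and labels are invented to echo the “normal vs. weird” heuristic earlier in this answer):

```python
from collections import Counter

def knn_predict(examples, query, k=3):
    """Classify `query` by analogy: take the majority label among its
    k most similar stored examples. `examples` is a list of
    ((x, y), label) pairs; similarity is (negative) squared distance."""
    by_distance = sorted(
        examples,
        key=lambda ex: (ex[0][0] - query[0]) ** 2 + (ex[0][1] - query[1]) ** 2,
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

examples = [((0, 0), "weird"),  ((0, 1), "weird"),  ((1, 0), "weird"),
            ((5, 5), "normal"), ((5, 6), "normal"), ((6, 5), "normal")]
print(knn_predict(examples, (5.5, 5.5)))  # → normal: it resembles the "normal" cluster
```

Note that, exactly as described above, such a system treats any failure as a “data” problem: the only way to change its answers is to add more examples, not to give it a better model.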