# Sitemap


## Matrix Exponential and Systems of Linear Differential Equations

In this post, we will learn how to solve systems of linear differential equations. These systems take the form shown below. Read more...

## Generalized Eigenvectors and Jordan Form

As we saw previously, not every matrix is diagonalizable, even when allowing complex eigenvalues/eigenvectors. The matrix below was given as an example of a non-diagonalizable matrix. Read more...

## Recursive Sequence Formulas via Diagonalization

In this post, we introduce an interesting application of matrix diagonalization: constructing closed-form expressions for recursive sequences. Read more...

## Eigenvalues, Eigenvectors, and Diagonalization

Suppose we want to compute a matrix raised to a large power, i.e. multiplied by itself many times. Read more...

## Inverse Matrices

In this post, we introduce the idea of the inverse of a matrix, which undoes the transformation of that matrix. For example, it’s straightforward that the inverse of the rescaling matrix Read more...

## Rescaling, Shearing, and the Determinant

The key insight in this post is that every square matrix can be decomposed into a product of rescalings and shears. Before we elaborate on that, though, let’s discuss what rescalings and shears are, in terms of matrices. Read more...

## Matrix Multiplication

We have seen how to multiply a vector by a matrix. Now, we will see how to multiply a matrix by another matrix. Whereas multiplying a vector by a matrix corresponds to a linear transformation of that vector, multiplying a matrix by another matrix corresponds to a composition of linear transformations. Read more...

## Linear Systems as Transformations of Vectors by Matrices

Let’s create a compact notation for expressing systems of linear equations like the one shown below. Read more...

## Higher-Order Variation of Parameters

Until this point we’ve been working exclusively with linear systems. However, solving linear systems can sometimes be a necessary component of solving nonlinear systems. For example, recall the variation of parameters method for solving a second-order differential equation of the form Read more...

## Shearing, Cramer’s Rule, and Volume by Reduction

Not only does a nonzero determinant tell us that a linear system has exactly one solution; it can also help us quickly find that solution through a process known as Cramer’s rule. Read more...

## Volume as the Determinant of a Square Linear System

We have seen that the space of linear equations is actually a vector space, and that the linear equations in any particular system span a subspace of this vector space. However, there is also another way to interpret linear systems in terms of vectors: a linear system can be interpreted as a single vector equation stating that some multiples of particular vectors add up to another particular vector. Read more...

## N-Dimensional Volume Formula

N-dimensional volume generalizes the idea of the space occupied by an object:

- 1-dimensional volume refers to the space occupied by a 1-dimensional object, such as the length of a line segment.
- 2-dimensional volume refers to the space occupied by a 2-dimensional object, such as the area of a square.
- 3-dimensional volume is what we normally mean by the word “volume”: the amount of space occupied by a 3-dimensional object, such as the volume of a cube.

## Elimination as Vector Reduction

Recall that systems of linear equations can be solved through elimination, multiplying equations by constants and adding equations to each other to cancel variables. For example, to solve the following linear system Read more...

## Span, Subspaces, and Reduction

The span of a set of vectors consists of all vectors that can be made by adding multiples of vectors in the set. For example, the span of the set $\left\lbrace \left< 1,0 \right>, \left< 0,1 \right> \right\rbrace$ is just the entirety of the 2-dimensional plane: any vector $\left< x,y \right>$ in this plane can be made by adding $x \left< 1,0 \right> + y \left< 0,1 \right>$. For example, $\left< 5,-2 \right>$ can be written as $5\left< 1,0 \right> - 2 \left< 0,1 \right>$. Read more...
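As a quick sanity check (plain Python, my addition rather than part of the post), we can confirm that $5\left< 1,0 \right> - 2 \left< 0,1 \right>$ really is $\left< 5,-2 \right>$ by computing the combination componentwise:

```python
# Form the linear combination 5*e1 - 2*e2 of the standard basis vectors
# e1 = <1, 0> and e2 = <0, 1>, working componentwise.
e1 = (1, 0)
e2 = (0, 1)
combo = tuple(5 * a - 2 * b for a, b in zip(e1, e2))
print(combo)  # (5, -2)
```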

## Lines and Planes

A line starts at an initial point and proceeds straight in a constant direction. Thus, we can write the equation of a line as Read more...

## Dot Product and Cross Product

We know how to multiply a vector by a scalar, but what does it mean to multiply a vector by another vector? The two most common interpretations of vector multiplication are the dot product, and for vectors in 3 dimensions, the cross product. Read more...

## N-Dimensional Space

N-dimensional space consists of points that have N components. For example, $(0,0)$ is the origin in 2-dimensional space, $(0,0,0)$ is the origin in 3-dimensional space, and $(0,0,0,0)$ is the origin in 4-dimensional space. Similarly, the points $(0,0)$, $(1,0)$, $(0,1)$ are the corners of a triangle in 2-dimensional space, the points $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ are the corners of a tetrahedron in 3-dimensional space, and the points $(0,0,0,0)$, $(1,0,0,0)$, $(0,1,0,0)$, $(0,0,1,0)$, $(0,0,0,1)$ are the corners of a “hypertetrahedron” in 4-dimensional space. Read more...

## Solving Differential Equations with Taylor Series

Many differential equations don’t have solutions which can be expressed in terms of finite combinations of familiar functions. However, we can often solve for the Taylor series of the solution. Read more...

## Manipulating Taylor Series

To find the Taylor series of complicated functions, it’s often easiest to manipulate the Taylor series of simpler functions, such as those given below. Read more...

## Taylor Series

The sum formula for a geometric series is an example of a representation of a non-polynomial function as an infinite polynomial, within a particular range of inputs. Read more...

## Tests for Convergence

Previously, we saw that sum formulas are only valid for series which converge. But how can we tell whether a series converges or diverges, in the first place? Read more...

## Geometric Series

A geometric series is a sum of the form $r+r^2+r^3+\cdots$ for some number $r$. For example, when $r=\frac{1}{2}$, the corresponding geometric series is $\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots$. This series might look like it grows bigger and bigger as you add more terms, but there is actually a limit to how big it can get. Read more...
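A quick numerical sketch (not part of the post) makes the limit plausible: the partial sums of $\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots$ climb toward $1$ but never exceed it.

```python
# Partial sums of the geometric series r + r^2 + r^3 + ... for r = 1/2.
# The sums approach r / (1 - r) = 1 without ever exceeding it.
r = 0.5
partial_sums = []
total = 0.0
for n in range(1, 51):
    total += r ** n
    partial_sums.append(total)
print(partial_sums[:4])  # [0.5, 0.75, 0.875, 0.9375]
print(abs(partial_sums[-1] - 1.0) < 1e-12)  # True
```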

## Variation of Parameters

When we know the zero solutions $y_0$ of a differential equation $y''+a_1(x)y'+a_0(x)y=f(x)$, we can use a method called variation of parameters to find the particular solution. This method is especially useful in cases where we are unable to guess the particular solution through undetermined coefficients. Read more...

## Integrating Factors

We know how to solve differential equations of the form Read more...

## Undetermined Coefficients

In the previous post, we learned how to solve differential equations of the form Read more...

## Characteristic Polynomial of a Differential Equation

In this post, we learn a technique for solving differential equations of the form Read more...

## Solving Differential Equations by Substitution

Sometimes, non-separable differential equations can be converted into separable differential equations by way of substitution. For example, $y'+y=x$ is a non-separable differential equation as-is. However, we can make a variable substitution $u=x-y$ to turn it into a separable differential equation. Differentiating both sides of $u=x-y$ with respect to $x$, and interpreting $y$ as a function of $x$, we have $u'=1-y'$, so $y'=1-u'$. Substituting, the equation becomes separable and thus solvable in terms of $u$. Read more...
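As a hedged check (using SymPy, which the post does not mention), we can let a CAS solve $y'+y=x$ directly and verify the result; it agrees with what the substitution method produces:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The non-separable equation y' + y = x from the example above.
ode = sp.Eq(y(x).diff(x) + y(x), x)
sol = sp.dsolve(ode, y(x))
print(sol)  # the general solution: y(x) = C1*exp(-x) + x - 1

# checkodesol substitutes the solution back into the ODE.
ok, residual = sp.checkodesol(ode, sol)
print(ok)  # True
```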

## Slope Fields and Euler Approximation

When faced with a differential equation that we don’t know how to solve, we can sometimes still approximate the solution by simpler methods. If we just want to get an idea of what the solutions of the differential equation look like on a graph, we can construct a slope field. Read more...

## Separation of Variables

In differential equations, we are given an equation in terms of the derivative(s) of some function, and we need to solve for the function which makes the equation true. For example, a simple differential equation is $y'=2x$, and its solution is just the antiderivative $y=x^2+C$. Read more...

## Improper Integrals

Improper integrals are integrals which have bounds or function values which extend to positive or negative infinity. For example, $\int_1^\infty \frac{1}{x^2} \, dx$ is an improper integral because its upper bound is at infinity. Likewise, $\int_0^1 \frac{1}{\sqrt{x}} \, dx$ is an improper integral because $\frac{1}{\sqrt{x}}$ approaches infinity as $x$ approaches the lower bound of integration, $0$. Read more...
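Both examples happen to converge, and the values can be checked symbolically; this SymPy snippet is my addition, not from the post:

```python
import sympy as sp

x = sp.symbols('x')

# Upper bound at infinity: the integral of 1/x^2 from 1 to oo converges to 1.
val1 = sp.integrate(1 / x**2, (x, 1, sp.oo))
print(val1)  # 1

# Integrand blows up at the lower bound x = 0, yet the area is finite: 2.
val2 = sp.integrate(1 / sp.sqrt(x), (x, 0, 1))
print(val2)  # 2
```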

## Integration by Parts

Integration by parts is another technique for simplifying integrals. We can apply integration by parts whenever an integral would be made simpler by differentiating some expression within the integral, at the cost of anti-differentiating another expression within the integral. The formula for integration by parts is given below: Read more...

## Integration by Substitution

Complicated integrals can sometimes be made simpler through the method of substitution. Substitution involves condensing an expression of $x$ into a single variable, say $u$, and then expressing the integral in terms of $u$ instead of $x$. Read more...

## Finding Area Using Integrals

In the last post, we learned how to evaluate integrals of the form $\int f(x) \, dx$, which are also known as indefinite integrals. In this section, we shall be concerned with definite integrals, which have bounds of integration. The definite integral $\int_a^b f(x) \, dx$ is evaluated by first finding the antiderivative $F(x) = \int f(x) \, dx$, and then computing the difference between the values of the antiderivative at the indicated bounds. Read more...
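A short SymPy sketch (mine, not from the post) of this two-step procedure: find the antiderivative, then take the difference of its values at the bounds.

```python
import sympy as sp

x = sp.symbols('x')
f = 2 * x

# Step 1: antiderivative F(x) (SymPy omits the constant C).
F = sp.integrate(f, x)
print(F)  # x**2

# Step 2: evaluate F at the bounds and subtract: F(3) - F(0).
definite = F.subs(x, 3) - F.subs(x, 0)
print(definite)  # 9

# Same answer directly from the definite integral.
print(sp.integrate(f, (x, 0, 3)))  # 9
```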

## Antiderivatives

An antiderivative of a function $f(x)$ is a function $F(x)$ whose derivative is $f(x)$, i.e. $F'(x)=f(x)$. For example, an antiderivative of $2x$ is $x^2$, because $(x^2)'=2x$. Another antiderivative of $2x$ is $x^2+1$, because $(x^2+1)' = 2x$. To encapsulate all possibilities, we say that the antiderivative of $2x$ is $x^2+C$ where $C$ is a constant. Read more...

## L’Hôpital’s Rule

L’Hôpital’s rule provides a way to evaluate limits that take the indeterminate forms of $\frac{0}{0}$ or $\frac{\infty}{\infty}$. It says that, for such limits, we can differentiate the numerator and denominator separately, without changing the actual value of the limit. Read more...
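For instance (an example of mine, not necessarily the one in the post), $\lim_{x\to 0} \frac{\sin x}{x}$ has the form $\frac{0}{0}$, and differentiating top and bottom gives $\frac{\cos x}{1} \to 1$. SymPy agrees:

```python
import sympy as sp

x = sp.symbols('x')

# sin(x)/x is indeterminate (0/0) at x = 0; the limit is 1.
lim_original = sp.limit(sp.sin(x) / x, x, 0)
print(lim_original)  # 1

# L'Hopital: differentiate numerator and denominator separately.
lim_lhopital = sp.limit(sp.cos(x) / 1, x, 0)
print(lim_lhopital)  # 1
```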

## Differentials and Approximation

The chain rule tells us that we can treat the derivative $\frac{df}{dx}$ like a fraction when multiplying by other derivatives. In this section, we continue the idea of interpreting the derivative as a fraction, extending it to an even more literal sense. Read more...

## Finding Extrema

Derivatives can be used to find a function’s local extreme values, its peaks and valleys. At its peaks and valleys, a function’s derivative is either $0$ (a smooth, rounded peak/valley) or undefined (a sharp, pointy peak/valley). Read more...

## Derivatives of Non-Polynomial Functions

In this section, we introduce rules for the derivatives of exponential, logarithmic, trigonometric, and inverse trigonometric functions. Although it’s possible to compute each derivative using the difference quotient, it would take a long time to compute derivatives during calculus problems if we had to start from scratch with the difference quotient every time, so it’s advantageous to remember the derivative rules. The derivative rules are to calculus what the multiplication table is to arithmetic. Read more...

## Properties of Derivatives

We know that when differentiating polynomials, we can differentiate each term individually. But why are we able to do this? Does multiplication work the same way? What about division? We answer these questions in this section. Read more...

## Chain Rule

The chain rule tells us how to take derivatives of compositions of functions. Informally, it says that we can “forget” about the inside of a function when we take the derivative, as long as we multiply by the derivative of the inside afterwards. For example, to differentiate $(x^2+1)^{100}$, we can use the power rule, as long as we multiply by the derivative of the inside $(x^2+1)$ afterwards. Read more...

## Power Rule for Derivatives

It can be a pain to evaluate the difference quotient every time we want to take the derivative of a function. Luckily, there are some patterns in derivatives that allow us to compute derivatives without having to go through all the steps of computing the limit of the difference quotient. One such pattern is the power rule, which tells us that the derivative of a function $f(x)=x^n$, where $n$ is some constant number, is given by $f'(x)=nx^{n-1}$. Several examples are shown below. Read more...
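A few illustrative instances of the power rule (examples of mine, checked with SymPy rather than taken from the post):

```python
import sympy as sp

x = sp.symbols('x')

# Power rule: d/dx x^n = n * x^(n-1).
d1 = sp.diff(x**3, x)
print(d1)  # 3*x**2

d2 = sp.diff(x**10, x)
print(d2)  # 10*x**9

# The rule also covers fractional exponents, e.g. sqrt(x) = x^(1/2).
d3 = sp.diff(sp.sqrt(x), x)
print(d3)  # 1/(2*sqrt(x))
```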

## Derivatives and the Difference Quotient

The derivative of a function is the function’s slope at a particular point. We can approximate the derivative at a point $(x,f(x))$ by choosing another nearby point on the function, and computing the slope. If we increase the input $x$ by a small amount $\Delta x$, then we reach an x-coordinate of $x+\Delta x$, and the corresponding point on the function is $(x+\Delta x, f(x+\Delta x))$. Read more...
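The slope through $(x, f(x))$ and $(x+\Delta x, f(x+\Delta x))$ is $\frac{f(x+\Delta x)-f(x)}{\Delta x}$. A small numerical sketch (mine, not the post's) shows this quotient approaching the true derivative $f'(3)=6$ for $f(x)=x^2$ as $\Delta x$ shrinks:

```python
# Difference quotient (f(x + dx) - f(x)) / dx for f(x) = x^2 at x = 3.
# The true derivative there is 2 * 3 = 6.
def difference_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda t: t ** 2
for dx in (0.1, 0.01, 0.001):
    print(difference_quotient(f, 3.0, dx))  # values approaching 6
```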

## Limits by Logarithms, Squeeze Theorem, and Euler’s Constant

A useful property of limits is that they can be brought inside continuous functions, i.e. the limit of a continuous function is the function of the limit. For example, $\sqrt{x}$ is a continuous function, so to take the limit of the square root of some expression, we can first find the limit of the expression and then take the square root. Read more...

## Evaluating Limits

The limit of a function $f(x)$, as $x$ approaches some value $a$, is the value we would expect for $f(a)$ if we saw only the portion of the graph around (but not including) $x=a$. If the resulting value is $L$, then we denote the limit as follows: Read more...

## Compositions of Functions

Compositions of functions consist of multiple functions linked together, where the output of one function becomes the input of another function. For example, the function $2x^2$ can be thought of as the composition of two functions: the first function squares its input, and then the second function doubles the result. Read more...

## Inverse Functions

Inverting a function entails reversing the outputs and inputs of the function. For example, if inputting $x=1$ into a function $f$ produces an output $f(1)=3$, then inputting $x=3$ into the inverse function $f^{-1}$ results in the output $f^{-1}(3)=1$. Read more...

## Reflections of Functions

When a function is reflected, it flips across one of the axes to become its mirror image. Read more...

## Rescalings of Functions

When a function is rescaled, it is stretched or compressed along one of the axes, like a slinky. The function’s general shape is preserved, but it might look a bit thinner or fatter afterwards. Read more...

## Shifts of Functions

When a function is shifted, all of its points move vertically and/or horizontally by the same amount. The function’s size and shape are preserved – it is just slid in some direction, like sliding a book across a table. Read more...

## Piecewise Functions

A piecewise function is a function which is pieced together from multiple different functions. For example, the absolute value function is a piecewise function because it consists of the line $y=-x$ for negative $x$, and $y=x$ for positive $x$. Read more...

## Trigonometric Functions

Trigonometric functions are functions which represent the relationship between sides and angles in right triangles. There are three main “trig” functions: sine, cosine, and tangent. A mnemonic often used to remember what they represent is SohCahToa:

- The SINE of an angle is the ratio of the lengths of the OPPOSITE side and the HYPOTENUSE.
- The COSINE of an angle is the ratio of the lengths of the ADJACENT side and the HYPOTENUSE.
- The TANGENT of an angle is the ratio of the lengths of the OPPOSITE side and the ADJACENT side.
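These ratios can be sanity-checked on a concrete 3-4-5 right triangle (a standard example, my addition rather than the post's):

```python
import math

# A 3-4-5 right triangle; theta is the angle opposite the side of length 3.
opp, adj, hyp = 3.0, 4.0, 5.0
theta = math.atan2(opp, adj)

print(math.isclose(math.sin(theta), opp / hyp))  # True  (Soh)
print(math.isclose(math.cos(theta), adj / hyp))  # True  (Cah)
print(math.isclose(math.tan(theta), opp / adj))  # True  (Toa)
```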

## Absolute Value

An absolute value function represents the magnitude of a number, i.e. its distance from $0$. For example, the absolute value of $-3$ is $3$, and the absolute value of $4$ is $4$. We write this as $|-3|=3$, and $|4|=4$. In effect, absolute value just removes the negative sign from a number, if there is a negative sign to begin with. Read more...

## Exponential and Logarithmic Functions

Exponential functions have variables as exponents, e.g. $f(x)=2^x$. Their end behavior consists of growing without bound to infinity in one direction, and decaying to a horizontal asymptote of $y=0$ in the other direction. The size of the number which is exponentiated, called the base, governs which direction corresponds to which end behavior. Read more...

## Radical Functions

A radical function is a function that involves roots: square roots, cube roots, or any kind of fractional exponent in general. We can often infer what their graphs look like by sandwiching them between polynomial functions. For example, the radical function $f(x)=\sqrt{x^3}$ can be written as $f(x)=x^{3/2}$, and its exponent $\frac{3}{2}$ is between $1$ and $2$, so the graph of $f$ lies between the graphs of $y=x$ and $y=x^2$. Read more...

## Graphing Rational Functions with Slant and Polynomial Asymptotes

Horizontal asymptotes are horizontal lines which are indicated by a constant whole number term in the proper form of a rational function. Likewise, slant asymptotes are slanted lines which are indicated by a linear term in the proper form of a rational function. For example, the proper form of $r(x)=\frac{2x^2-3x}{x-1}$ is given by $r(x)=2x-1-\frac{1}{x-1}$, which has $2x-1$ as its whole number term. As a result, $r$ has a slant asymptote at $y=2x-1$, which appears in the graph of $r$ below. Read more...
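The proper form quoted above can be reproduced with SymPy's partial-fraction routine (my check, not part of the post):

```python
import sympy as sp

x = sp.symbols('x')
r = (2 * x**2 - 3 * x) / (x - 1)

# Rewrite in proper form: a polynomial part plus a proper fraction.
proper = sp.apart(r, x)
print(proper)  # 2*x - 1 - 1/(x - 1)

# The polynomial part 2x - 1 is the slant asymptote.
diff = sp.simplify(proper - (2 * x - 1 - 1 / (x - 1)))
print(diff)  # 0
```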

## Graphing Rational Functions with Horizontal and Vertical Asymptotes

The horizontal and vertical asymptotes of a rational function can give us insight into the shape of its graph. For example, consider the function $r(x)=\frac{3x}{x-1}$, which has a horizontal asymptote $y=3$ and a vertical asymptote $x=1$. Read more...

## Vertical Asymptotes of Rational Functions

Unlike polynomials, rational functions can “blow up” to positive or negative infinity even for relatively small input values. Such input values are called vertical asymptotes, because they represent vertical lines that the function approaches but never quite reaches. Read more...

## Horizontal Asymptotes of Rational Functions

Like polynomials, rational functions can have end behavior that goes to positive or negative infinity. However, rational functions can also have another form of end behavior in which they become flat, approaching (but never quite reaching) a horizontal line known as a horizontal asymptote. Read more...

## Polynomial Long Division

A rational function is a fraction whose numerator and denominator are both polynomials. Rational functions are usually written in proper form, where the numerator is of a smaller degree than the denominator. (The degree of a polynomial is its highest exponent.) Read more...

## Sketching Graphs of Polynomials

In the previous posts, we learned how to find end behavior, zeros, and factored forms of polynomials. In this post, we will put all this information together to sketch graphs of polynomials. Read more...

## Rational Roots and Synthetic Division

In the previous post, we learned how to find the remaining zeros of a polynomial if we are given some zeros to start with. But how do we get those initial zeros in the first place, if they’re not given to us and aren’t obvious from the equation? Read more...

## Zeros of Polynomials

The zeros of a polynomial are the inputs which cause it to evaluate to zero. For example, a zero of the polynomial $x^3-2x^2-5x+6$ is $x=1$ because $(1)^3-2(1)^2-5(1)+6=0$. Another zero is $x=-2$ because $(-2)^3-2(-2)^2-5(-2)+6=0$. Can you find the rest? Read more...
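The remaining zero can be found with SymPy (a check of mine, not from the post); together with $x=1$ and $x=-2$ it completes the set:

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 - 2 * x**2 - 5 * x + 6

zeros = sp.solve(p, x)
print(sorted(zeros))  # [-2, 1, 3]

# Each zero makes the polynomial evaluate to 0.
print([p.subs(x, z) for z in zeros])  # [0, 0, 0]
```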

## Standard Form and End Behavior of Polynomials

Polynomials include linear expressions and quadratic expressions, as well as expressions adding multiples of higher powers of the variable. (The degree of a polynomial is its highest exponent.) Read more...

## Systems of Inequalities

To solve a system of inequalities, we need to solve each individual inequality and find where all their solutions overlap. For example, to solve the system Read more...

## Quadratic Inequalities

Quadratic inequalities are best visualized in the plane. For example, to solve a quadratic inequality $-x^2+x+2>0$, we can find the values of $x$ where the parabola $y=-x^2+x+2$ is positive. Since $y=-x^2+x+2$ is a downward parabola, the solution consists of the values of $x$ in its midsection, which arches over the x-axis. That is, the solution consists of all x-values between the solutions to $-x^2+x+2=0$. This quadratic equation factors to $-(x-2)(x+1)=0$, so the parabola’s midsection is given by $-1<x<2$, or $(-1,2)$ in interval notation. Read more...
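SymPy can confirm the interval (my check, not part of the post):

```python
import sympy as sp

x = sp.symbols('x')

# Solve -x^2 + x + 2 > 0; the parabola is positive between its roots.
solution = sp.solve_univariate_inequality(-x**2 + x + 2 > 0, x,
                                          relational=False)
print(solution)  # Interval.open(-1, 2)
print(solution == sp.Interval.open(-1, 2))  # True
```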

## Linear Inequalities in the Plane

When a linear equation has one variable, the solution covers a section of the number line: if our solution is $x > \text{some number}$, then the solution covers the section of the number line that lies right of that number; if our solution is $x<\text{some number}$, then the solution covers the section of the number line that lies left of that number. If equality is allowed (i.e. $\geq$ or $\leq$), then we use a closed circle to indicate that the circled number is itself a solution; otherwise, if equality is not allowed (i.e. $>$ or $<$), then we use an open circle. Read more...

## Linear Inequalities in the Number Line

An inequality is similar to an equation, but instead of saying two quantities are equal, it says that one quantity is greater than or less than another. For example, since $1$ is greater than $0$, we write $1>0$. Likewise, since $0$ is less than $1$, we write $0<1$. If we write $x>0$, then we mean that $x$ can be $1$, $2$, $\pi$, $0.000001$, or any other positive number. If we write $x<0$, then we mean that $x$ can be $-1$, $-2$, $-\pi$, $-0.000001$, or any other negative number. Read more...

## Systems of Quadratic Equations

Systems of quadratic equations can be solved via substitution. After substituting, the resulting equation can itself be reduced down to a quadratic equation and solved by techniques covered in this chapter. Read more...

## Vertex Form

To easily graph a quadratic equation, we can convert it to vertex form: Read more...

## Completing the Square

Completing the square is another method for solving quadratic equations. Although we can use the quadratic formula to solve any quadratic equation, completing the square helps us gain a better intuition for quadratic equations and understand where the quadratic formula comes from. As we will see in the next section, completing the square will also help us rearrange quadratic equations into forms that are easy to graph. Read more...

## The Quadratic Formula

Some quadratic equations cannot be factored easily. For example, in the equation $x^2+3x+1=0$, we need to find two factors of $1$ that add to $3$. But the only integer factors of $1$ are $1$ and $1$, and they definitely don’t add to $3$! To solve these hard-to-factor quadratic equations, it’s easiest to use the quadratic formula given below, which tells us explicitly how to compute the solutions of a quadratic equation $ax^2+bx+c=0$. Read more...

## Solving Quadratic Equations by Factoring

Factoring is a method for solving quadratic equations. It involves converting the quadratic equation to standard form, then factoring it into a product of two linear terms (which are called factors), and finally solving for the variable values that make either factor equal to $0$. Read more...

## Standard Form of a Quadratic Equation

Quadratic equations are similar to linear equations, except that they contain squares of a single variable. Read more...

## Linear Systems

A linear system consists of multiple linear equations, and the solution of a linear system consists of the pairs which satisfy all of the equations. For example, the solution to the linear system Read more...

## Standard Form of a Line

The standard form of a linear equation is $ax+by=c$, where $a$, $b$, and $c$ are all integers and $a$ is nonnegative. For example, we can convert the equation $y=\frac{3}{5}x+\frac{10}{3}$ to standard form by moving $x$ and $y$ to the same side and multiplying to cancel out any fractions. Read more...
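Working that conversion through (my arithmetic, not the post's): moving $x$ to the left gives $-\frac{3}{5}x + y = \frac{10}{3}$, and multiplying through by $-15$ clears the fractions and makes the $x$-coefficient positive, yielding $9x - 15y = -50$. A quick check with exact fractions:

```python
from fractions import Fraction

# The line y = (3/5)x + 10/3, rewritten in standard form as 9x - 15y = -50.
a, b, c = 9, -15, -50

# Verify with a point on the line: pick x = 5, compute y from the
# slope-intercept form, and check it satisfies the standard form.
x = Fraction(5)
y = Fraction(3, 5) * x + Fraction(10, 3)
print(a * x + b * y == c)  # True
```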

## Point-Slope Form

Suppose we want to write the equation of a line with a given slope $m=2$, through a particular point $(3,5)$. In the previous post, we substituted the given information into a slope-intercept equation form $y=mx+b$, solved for $b$, and rewrote the slope-intercept form with $m$ and $b$ substituted so that $x$ and $y$ were the only variables. Read more...

## Slope-Intercept Form

In the previous post, we were solving linear equations in one variable. Now, let’s consider linear equations in two variables. A few examples are shown below: Read more...

## Solving Linear Equations

Loosely speaking, a linear equation is an equality statement containing only addition, subtraction, multiplication, and division. It does not need to include all of these operations, but it cannot include operations beyond them, such as exponentiation. Read more...