According to the author, students in his quantum mechanics course find it helpful to see a combination of Erwin Schrödinger's wave-mechanics approach, Werner Heisenberg's matrix-mechanics approach, and Paul Dirac's bra and ket notation.

**1.2 Dirac Notation**

A quantum wavefunction can be expressed as a weighted combination of basis wavefunctions. A generalized version of the dot or scalar product, called the inner product, can be used to calculate how much each component wavefunction contributes to the sum, and this determines the probability of various measurement outcomes.

The notation was developed by Paul Dirac in 1939, while he was working with a generalized version of the dot or scalar product called the **inner product**, written as ⟨A∣B⟩. In this context "generalized" means the inner product can be used with higher-dimensional abstract vectors with complex components. Dirac realized that the inner product bracket ⟨A∣B⟩ could be conceptually divided into two pieces, a left half which he called a "**bra**" and a right half which he called a "**ket**". In conventional notation, an inner product between vectors 𝐴̄ and 𝐵̄ might be written as 𝐴̄ ⋅ 𝐵̄ or (𝐴̄, 𝐵̄), but in Dirac notation the inner product is written as

(1.12) Inner product of ∣𝐴⟩ and ∣𝐵⟩ = ⟨𝐴∣ times ∣𝐵⟩ = ⟨𝐴∣𝐵⟩.

A ket is represented by a column vector and a bra by a row vector:

(1.13) $|A\rangle = \begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix}$,

(1.14) $\langle A| = \begin{pmatrix} A^*_x & A^*_y & A^*_z \end{pmatrix}$,

where the components with superscript * are complex conjugates (a concrete numerical expression appears in Section 1.4). The inner product ⟨𝐴∣𝐵⟩ is thus

(1.15) $\langle A| \times |B\rangle = \langle A|B\rangle = \begin{pmatrix} A^*_x & A^*_y & A^*_z \end{pmatrix} \begin{pmatrix} B_x \\ B_y \\ B_z \end{pmatrix}$,

(1.16) $\langle A|B\rangle = A^*_x B_x + A^*_y B_y + A^*_z B_z$.

Since we will be dealing with generalized vectors that have more than the three components 𝑥, 𝑦, 𝑧, instead of using the Cartesian unit vectors 𝑖̄, 𝑗̄, 𝑘̄, we will use 𝜖̄_{1}, 𝜖̄_{2}, . . . 𝜖̄_{𝑁}. So the equation

(1.17-18) $|A\rangle = A_x|i\rangle + A_y|j\rangle + A_z|k\rangle$ becomes $|A\rangle = A_1|\epsilon_1\rangle + A_2|\epsilon_2\rangle + \cdots + A_N|\epsilon_N\rangle = \sum_{i=1}^{N} A_i|\epsilon_i\rangle$.

On the other hand, a bra is a "linear functional" (also called a "**covector**" or a "one-form") that combines with a ket to produce a scalar; mathematicians say bras map vectors to the field of scalars. A linear functional is essentially a mathematical device (or an instruction) that operates on another object. Hence a bra operates on a ket, and the result is a scalar. Bras don't inhabit the same vector space as kets - they live in their own vector space, called the "**dual space**" to the space of kets. Within their space, bras can be added together and multiplied by scalars to produce new bras.

One reason that the space is called "dual" to the space of kets is that for every ket there is a corresponding bra, and when a bra operates on its corresponding (dual) ket, the scalar result is the square of the norm of the ket:

$\langle A|A\rangle = \begin{pmatrix} A^*_1 & A^*_2 & \cdots & A^*_N \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_N \end{pmatrix} = |\bar{A}|^2.$

The bra ⟨𝐴∣ corresponds to the ket ∣𝐴⟩, but the components of bra ⟨𝐴∣ are the complex conjugates of the components of ∣𝐴⟩ (a concrete expression appears in Section 1.4). So a bra is a device for turning a vector (ket) into a scalar: the bra ⟨𝐴∣ corresponding to ∣𝐴⟩ combines with a ket ∣𝐵⟩ to produce a scalar, as in Eq. (1.16).
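As a quick numerical check of Eq. (1.16) (a sketch assuming NumPy; the vectors here are our own illustrative values, not the book's), note that `np.vdot` conjugates its first argument, which matches the bra-ket convention exactly:

```python
import numpy as np

# Kets as vectors with complex components
A = np.array([1 + 2j, 3 - 1j, 0 + 1j])
B = np.array([2 + 0j, 1 + 1j, 0 - 1j])

# <A|B>: the bra conjugates the components of |A>.
# np.vdot conjugates its FIRST argument, matching the bra-ket convention.
inner = np.vdot(A, B)

# The same sum written out explicitly, as in Eq. (1.16)
explicit = np.sum(np.conj(A) * B)

print(inner)   # a single complex scalar
```

Note that `np.dot` would *not* conjugate, so for complex vectors `np.vdot` (or an explicit `np.conj`) is the correct choice.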

**1.3 Abstract Vectors and Functions**

To understand the use of bras and kets in quantum mechanics, it's necessary to generalize the concepts of vector components and basis vectors to functions. Instead of attempting to replicate three-dimensional physical space as in **Fig. 1.4a**, simply line up the vector components along the horizontal axis of a two-dimensional graph, with the vertical axis representing the amplitude of the components, as in **Fig. 1.4b**.

Higher-dimensional **abstract spaces** turn out to be very useful tools for solving problems in several areas of physics, including classical and quantum mechanics. These spaces are called "abstract" because they are nonphysical - that is, their dimensions don't represent the physical dimensions of the universe we inhabit.

Now imagine drawing a set of axes in an abstract space and marking each axis with the value of a parameter. In this way we can consider the graphs of the components of an N-dimensional vector. The multidimensional space most useful in quantum mechanics is an abstract vector space called "**Hilbert space**" after the German mathematician David Hilbert.

To understand the characteristics of Hilbert space, imagine a vector with an extremely large number of components; the components may then be treated as a continuous function rather than a set of discrete values. The function (call it "𝑓") is depicted as the curvy line connecting the tips of the vector components. Such functions can be added and multiplied by scalars: two functions 𝑓(𝑥) and 𝑔(𝑥) are added point by point at every 𝑥, and multiplying the original function 𝑓(𝑥) by a scalar scales its value at every 𝑥.

Finally, the inner product between two functions 𝑓(𝑥) and 𝑔(𝑥) is as follows

(1.19) ⟨𝑓(𝑥)∣𝑔(𝑥)⟩ = ∫ _{-∞}^{∞} 𝑓^{*}(𝑥)𝑔(𝑥) 𝑑𝑥,

where 𝑓^{*}(𝑥) represents the complex conjugate as in Eq. (1.16). The reason for taking the complex conjugate is explained in the next section.

There is one more condition that must be satisfied before we can call that vector space a Hilbert space. That condition is that the functions must have a finite norm:

(1.20) ∣𝑓(𝑥)∣^{2} = ⟨𝑓(𝑥)∣𝑓(𝑥)⟩ = ∫ _{-∞}^{∞} 𝑓^{*}(𝑥)𝑓(𝑥) 𝑑𝑥 < ∞.

Such functions are said to be "square summable" or "square integrable". Thus continuous functions have generalized "length" and "direction" and obey the rules of vector addition, scalar multiplication, and the inner product. Hilbert space is a collection of such functions that also have finite norm.
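Eqs. (1.19) and (1.20) are easy to check numerically. A minimal sketch, assuming NumPy and approximating the integrals by Riemann sums on a grid (the Gaussian test function is our choice, not the book's); a Gaussian is square integrable, so its norm is finite:

```python
import numpy as np

# Grid approximation of the inner product integral of Eq. (1.19)
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]

def inner(f_vals, g_vals):
    # <f|g> = integral of f*(x) g(x) dx, approximated by a Riemann sum
    return np.sum(np.conj(f_vals) * g_vals) * dx

f = np.exp(-x**2)            # a Gaussian: square integrable
norm_sq = inner(f, f).real   # <f|f> = integral of exp(-2x^2) dx = sqrt(pi/2)

print(norm_sq)   # close to 1.2533
```

A function such as f(x) = 1, by contrast, would make this integral grow without bound as the grid widens, so it does not belong to Hilbert space.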

**1.4 Complex Numbers, Vectors, and Functions**

One important difference between the Schrödinger equation and the classical wave equation is the presence of the **imaginary unit** "𝑖" (the square root of minus one). So this section contains a short review of **complex numbers** and their use in the context of vector components and Dirac notation.

The most general way of representing a complex quantity 𝑧 is

(1.21) 𝑧 = 𝑥 + 𝑖𝑦,
where 𝑥 is the real part, 𝑖 = √(-1) and 𝑦 is the imaginary part.

Imaginary numbers lie along a different number line. That number line is perpendicular to the real number line, and a two-dimensional plot of both number lines represents the "complex plane" shown in **Fig. 1.7**.

The magnitude of a complex number is

(1.22) ∣𝑧∣^{2} = 𝑥^{2} + 𝑦^{2}.

To find the magnitude of a complex number, it's necessary to multiply the quantity not by itself, but by its complex conjugate. The complex conjugate for 𝑧 = 𝑥 + 𝑖𝑦 is

(1.24) 𝑧^{*} = 𝑥 - 𝑖𝑦,

which is usually indicated by an asterisk. So we can obtain the magnitude as follows:

(1.25) ∣𝑧∣^{2} = 𝑧 ⨯ 𝑧^{*} = (𝑥 + 𝑖𝑦) ⨯ (𝑥 - 𝑖𝑦) = 𝑥^{2} - 𝑖𝑥𝑦 + 𝑖𝑥𝑦 - 𝑖^{2}𝑦^{2} = 𝑥^{2} + 𝑦^{2}.
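Eq. (1.25) can be confirmed directly in Python, which has built-in complex arithmetic (the value of z here is our own example):

```python
# Numerical check of Eq. (1.25): the squared magnitude of a complex
# number is the number times its complex conjugate.
z = 3 + 4j

mag_sq = (z * z.conjugate()).real   # z x z* = x^2 + y^2
print(mag_sq)        # 25.0

# Python's abs() returns the magnitude itself, the square root of z x z*
print(abs(z))        # 5.0
```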

And since the magnitude (or norm) of a vector 𝐴̄ can be found by taking the square root of the inner product of the vector with itself, the complex conjugate is built into the process of taking the inner product between complex quantities:

(1.26) ∣𝐴∣ = √(𝐴̄ ⋅ 𝐴̄) = √(𝐴^{*}_{x}𝐴_{x} + 𝐴^{*}_{y}𝐴_{y} + 𝐴^{*}_{z}𝐴_{z}) = √(∑_{𝑖=1}^{𝑁}𝐴^{*}_{𝑖} 𝐴_{𝑖}).

This also applies to complex functions:

(1.27) ∣𝑓(𝑥)∣ = √(⟨𝑓(𝑥)∣𝑓(𝑥)⟩) = √(∫_{-∞}^{∞}𝑓(𝑥)^{*}𝑓(𝑥) 𝑑𝑥).

If the inner product involves two different vectors or functions, by convention the complex conjugate is taken of the *first* member of the pair:

(1.28) 𝐴̄ ⋅ 𝐵̄ = ∑_{𝑖=1}^{𝑁}𝐴^{*}_{𝑖} 𝐵_{𝑖} and ⟨𝑓(𝑥)∣𝑔(𝑥)⟩ = ∫_{-∞}^{∞}𝑓^{*}(𝑥)𝑔(𝑥) 𝑑𝑥.

The requirement to take the complex conjugate of one member of the inner product for complex vectors means that the order matters. So 𝐴̄ ⋅ 𝐵̄ is not the same as 𝐵̄ ⋅ 𝐴̄. That's because

(1.29) 𝐴̄ ⋅ 𝐵̄ = ∑_{𝑖=1}^{𝑁}𝐴^{*}_{𝑖} 𝐵_{𝑖} = ∑_{𝑖=1}^{𝑁}(𝐴_{𝑖} 𝐵^{*}_{𝑖})^{*} = ∑_{𝑖=1}^{𝑁}(𝐵^{*}_{𝑖} 𝐴_{𝑖})^{*} = (𝐵̄ ⋅ 𝐴̄)^{*}

⟨𝑓(𝑥)∣𝑔(𝑥)⟩ = ∫_{-∞}^{∞}𝑓(𝑥)^{*}𝑔(𝑥) 𝑑𝑥 = ∫_{-∞}^{∞}[𝑔(𝑥)^{*}𝑓(𝑥)]^{*} 𝑑𝑥 = (⟨𝑔(𝑥)∣𝑓(𝑥)⟩)^{*}
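The conjugate-symmetry property of Eq. (1.29) can be seen numerically (a sketch assuming NumPy; the vectors are our own illustrative values):

```python
import numpy as np

A = np.array([1 + 1j, 2 - 3j])
B = np.array([0 + 2j, 1 + 1j])

AB = np.vdot(A, B)   # A-bar . B-bar, with A conjugated
BA = np.vdot(B, A)   # the same pair in the reverse order

# Eq. (1.29): reversing the order conjugates the result
print(AB, BA)
```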

**1.5 Orthogonal Functions**

For vectors, the concept of orthogonality is straightforward: two vectors are orthogonal if their scalar product is zero. Similar considerations apply to 𝑁-dimensional vectors as well as continuous functions, as shown in **Fig. 1.9**. So if the abstract vectors 𝐴̄ and 𝐵̄ are orthogonal, their inner product (𝐴̄, 𝐵̄) must equal zero:

(𝐴̄, 𝐵̄) = ∑_{𝑖=1}^{𝑁}𝐴^{*}_{𝑖} 𝐵_{𝑖} = 0.

⟨𝑓(𝑥)∣𝑔(𝑥)⟩ = ∫_{-∞}^{∞}𝑓(𝑥)^{*}𝑔(𝑥) 𝑑𝑥 = 0.
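Orthogonality of functions is easy to verify numerically; a sketch assuming NumPy, using sin 𝑥 and sin 2𝑥 on the interval [-π, π] (our choice of functions and interval, anticipating the sinusoidal basis of Section 1.6):

```python
import numpy as np

# Numerical check that sin(x) and sin(2x) are orthogonal on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]

f = np.sin(x)
g = np.sin(2 * x)

inner_fg = np.sum(f * g) * dx   # both functions are real, so no conjugate needed
inner_ff = np.sum(f * f) * dx   # <f|f> = pi, decidedly nonzero

print(inner_fg, inner_ff)
```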

As we will see in section 2.5, orthogonal basis functions play an important role in determining the possible outcomes of measurements.

**1.6 Finding Components Using the Inner Product**

The components of vectors can be written as the scalar product of each unit vector with the vector:

(1.30) 𝐴_{x} = 𝑖̄ ⋅ 𝐴̄ 𝐴_{y} = 𝑗̄ ⋅ 𝐴̄ 𝐴_{z} = 𝑘̄ ⋅ 𝐴̄

which can be concisely written as

(1.31) 𝐴_{𝑖} = 𝜖̄_{𝑖} ⋅ 𝐴̄ 𝑖 = 1, 2, 3,

in which 𝜖̄_{1} represents 𝑖̄, 𝜖̄_{2} represents 𝑗̄, and 𝜖̄_{3} represents 𝑘̄.

This can be generalized to find the components of an 𝑁-dimensional abstract vector represented by the ket ∣𝐴⟩ in a basis system with orthogonal basis vectors 𝜖̄_{1}, 𝜖̄_{2}, . . . 𝜖̄_{𝑁}:

(1.32) 𝐴_{𝑖} = 𝜖̄_{𝑖} ⋅ 𝐴̄/∣𝜖̄_{𝑖}∣^{2} = ⟨𝜖̄_{𝑖}∣𝐴⟩/⟨𝜖̄_{𝑖}∣𝜖̄_{𝑖}⟩.

Notice that the basis vectors in this case are orthogonal but they don't necessarily have unit length. The process of dividing by the square of the norm of a vector or function is called "**normalization**", and orthogonal vectors or functions that have a length of one unit are "orthonormal". The condition of orthonormality for basis vectors 𝜖̄ is often written as

(1.34) 𝜖̄_{𝑖} ⋅ 𝜖̄_{𝑗} = ⟨𝜖̄_{𝑖}∣𝜖̄_{𝑗}⟩ = 𝛿_{𝑖𝑗},

in which 𝛿_{𝑖𝑗} represents the **Kronecker delta**, which has a value of one if 𝑖 = 𝑗 or zero if 𝑖 ≠ 𝑗.

The expansion of a vector as the weighted combination of a set of basis vectors, and the use of the normalized scalar product to find the vector's components for a specified basis, can be extended to the functions of Hilbert space. So the expansion of function ∣𝜓⟩ using basis functions ∣𝜓_{𝑛}⟩ is

(1.35) ∣𝜓⟩ = 𝑐_{1}∣𝜓_{1}⟩ + 𝑐_{2}∣𝜓_{2}⟩ + ⋅ ⋅ ⋅ + 𝑐_{𝑁}∣𝜓_{𝑁}⟩ = ∑_{𝑛=1}^{𝑁}𝑐_{𝑛}∣𝜓_{𝑛}⟩,

in which 𝑐_{𝑛} tells you the "amount" of basis function ∣𝜓_{𝑛}⟩. As long as the basis functions ∣𝜓_{1}⟩, ∣𝜓_{2}⟩ . . . ∣𝜓_{𝑁}⟩ are orthogonal, the components 𝑐_{1}, 𝑐_{2} . . . 𝑐_{𝑁} can be found using the normalized inner product:

𝑐_{1} = ⟨𝜓_{1}∣𝜓⟩/⟨𝜓_{1}∣𝜓_{1}⟩ = ∫_{-∞}^{∞}𝜓^{*}_{1}(𝑥)𝜓(𝑥) 𝑑𝑥/[∫_{-∞}^{∞}𝜓^{*}_{1}(𝑥)𝜓_{1}(𝑥) 𝑑𝑥]

(1.36) 𝑐_{2} = ⟨𝜓_{2}∣𝜓⟩/⟨𝜓_{2}∣𝜓_{2}⟩ = ∫_{-∞}^{∞}𝜓^{*}_{2}(𝑥)𝜓(𝑥) 𝑑𝑥/[∫_{-∞}^{∞}𝜓^{*}_{2}(𝑥)𝜓_{2}(𝑥) 𝑑𝑥]

⋮

𝑐_{𝑁} = ⟨𝜓_{𝑁}∣𝜓⟩/⟨𝜓_{𝑁}∣𝜓_{𝑁}⟩ = ∫_{-∞}^{∞}𝜓^{*}_{𝑁}(𝑥)𝜓(𝑥) 𝑑𝑥/[∫_{-∞}^{∞}𝜓^{*}_{𝑁}(𝑥)𝜓_{𝑁}(𝑥) 𝑑𝑥].

This approach to finding the components of a function (using sinusoidal basis functions) was pioneered by the French mathematician and physicist Jean-Baptiste Fourier (1768-1830). In quantum mechanics texts, this process is sometimes called "spectral decomposition", since the weighting coefficients (𝑐_{𝑛}) are called the "spectrum" of a function.
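A small numerical sketch of Eq. (1.36), assuming NumPy: we build a function from known amounts of sin 𝑛𝑥 (our own test function) and recover the coefficients with the normalized inner product:

```python
import numpy as np

# Recover the coefficients c_n of psi(x) = 2 sin(x) + 0.5 sin(3x)
# using the normalized inner product of Eq. (1.36)
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]

psi = 2.0 * np.sin(x) + 0.5 * np.sin(3 * x)

def coefficient(n):
    basis = np.sin(n * x)
    num = np.sum(basis * psi) * dx      # <psi_n|psi>   (real functions)
    den = np.sum(basis * basis) * dx    # <psi_n|psi_n>, the normalization
    return num / den

c = [coefficient(n) for n in (1, 2, 3)]
print(c)   # approximately [2.0, 0.0, 0.5]
```

The basis function sin 2𝑥 contributes nothing to 𝜓, and the inner product correctly reports its coefficient as zero.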

**2. Operators and Eigenfunctions**

In quantum mechanics, every physical observable is associated with a linear "operator" that can be used to determine the possible measurement outcomes and their probabilities for a given quantum state.

**2.1 Operators, Eigenvectors, and Eigenfunctions**

The operators in quantum mechanics are called "linear" because applying them to a sum of vectors or functions gives the same result as applying them to the individual vectors or functions and then summing the results. So if O is a linear operator (usually written with a caret, Ô, in quantum texts) and 𝑓_{1} and 𝑓_{2} are functions, then

(2.1) O(𝑓_{1} + 𝑓_{2}) = O(𝑓_{1}) + O(𝑓_{2}).

Linear operators also have the property that multiplying a function by a scalar and then applying the operator gives the same result as first applying the operator and then multiplying the result by the scalar. So if O is a linear operator, 𝑐 is a (potentially complex) scalar and 𝑓 is a function, then

(2.2) O(𝑐𝑓) = 𝑐O(𝑓).

To understand the operators used in quantum mechanics, we will consider an operator as a square matrix multiplied by a vector like this

(2.3) $\bar{\bar{R}}\bar{A} = \begin{pmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{pmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = \begin{pmatrix} R_{11}A_1 + R_{12}A_2 \\ R_{21}A_1 + R_{22}A_2 \end{pmatrix}.$

Consider, for example

$\bar{\bar{R}} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix}$

and the vector 𝐴̄ = 𝑖̄ + 3𝑗̄. Writing the components of 𝐴̄ as a column vector and multiplying gives

(2.4) $\bar{\bar{R}}\bar{A} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} (4)(1) + (-2)(3) \\ (-2)(1) + (4)(3) \end{pmatrix} = \begin{pmatrix} -2 \\ 10 \end{pmatrix}.$

So the operation of the matrix 𝑅 on vector 𝐴̄ produces another vector that has a different length and points in a different direction.

Now consider the effect of the matrix 𝑅 on a different vector - for example 𝐵̄ = 𝑖̄ + 𝑗̄, shown in **Fig. 2.2a**.

In this case, the multiplication looks like this:

(2.5) $\bar{\bar{R}}\bar{B} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} (4)(1) + (-2)(1) \\ (-2)(1) + (4)(1) \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 2\bar{B}.$

A vector whose direction is not changed by multiplication by a matrix is called an "eigenvector" of that matrix, and the factor by which the length of the vector is scaled is called the "eigenvalue" for that eigenvector. So as shown in **Fig. 2.2b**, the vector 𝐵̄ = 𝑖̄ + 𝑗̄ is an eigenvector of 𝑅 with an eigenvalue of 2.

Eq. (2.5) is an example of an "eigenvalue equation"; the general form is

(2.6) 𝑅_{matrix}𝐴̄ = 𝜆𝐴̄,

where 𝐴̄ represents an eigenvector of 𝑅_{matrix} with eigenvalue 𝜆.

The procedure for determining the eigenvalues and eigenvectors of a matrix is not difficult, and we omit it here. Working through that process for the matrix 𝑅 shows that the vector 𝐶̄ = 𝑖̄ - 𝑗̄ is also an eigenvector of 𝑅; its eigenvalue is 6.

It is worth noting that the sum of the eigenvalues of a matrix is equal to the trace of the matrix (which is 8 in this case) and the product of the eigenvalues is equal to the determinant of the matrix (which is 12 in this case).
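These claims about the matrix of Eq. (2.4) can be confirmed in a few lines (a sketch assuming NumPy):

```python
import numpy as np

# The matrix R from Eq. (2.4) and its eigen-decomposition
R = np.array([[4.0, -2.0],
              [-2.0, 4.0]])

eigvals, eigvecs = np.linalg.eig(R)

# B = i + j is an eigenvector with eigenvalue 2, as found in Eq. (2.5)
B = np.array([1.0, 1.0])
print(R @ B)                    # equals 2 * B

# Eigenvalue sum matches the trace; eigenvalue product matches the determinant
print(sorted(eigvals))          # approximately [2.0, 6.0]
print(np.trace(R), np.linalg.det(R))
```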

There are mathematical processes that act as operators on functions to produce new functions, and if the new function is a scalar multiple of the original function, that function is called an "eigenfunction" of the operator. The eigenfunction equation corresponding to the eigenvector equation (Eq. 2.6) is

(2.7) O𝜓 = 𝜆𝜓
in which 𝜓 represents an eigenfunction of operator Ô with eigenvalue 𝜆.

For example, consider the second-derivative operator Ĝ^{2} = 𝑑^{2}/𝑑𝑥^{2} applied to the function 𝑓(𝑥) = sin 𝑘𝑥:

(2.9) Ĝ^{2}𝑓(𝑥) = 𝑑^{2}(sin 𝑘𝑥)/𝑑𝑥^{2} = -𝑘^{2}sin 𝑘𝑥 = 𝜆(sin 𝑘𝑥).

That means that sin 𝑘𝑥 is an eigenfunction of the second-derivative operator Ĝ^{2} = 𝑑^{2}/𝑑𝑥^{2}, and the eigenvalue for this eigenfunction is 𝜆 = -𝑘^{2}.
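This eigenfunction relation can be checked numerically by approximating d²/dx² with a central finite difference (a sketch assuming NumPy; the value of k and the grid are our choices):

```python
import numpy as np

# Check that sin(kx) is an eigenfunction of d^2/dx^2 with eigenvalue -k^2
k = 3.0
x = np.linspace(0.1, 1.0, 1001)
h = x[1] - x[0]

f = np.sin(k * x)

# Central finite difference for the second derivative at interior points
second_deriv = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2

# The ratio (operator output) / (original function) should be -k^2 everywhere
ratio = second_deriv / f[1:-1]
print(ratio.min(), ratio.max())   # both close to -9
```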

In quantum mechanics, the eigenvalues of the eigenfunctions of an observable's operator represent the possible outcomes of measurements of that observable.

**2.2 Operators in Dirac Notations**

It's helpful to become familiar with the way operators fit into Dirac notation. In that notation, the general eigenvalue equation looks like this:

(2.10) O ∣𝜓⟩ = 𝜆 ∣𝜓⟩

in which ket ∣𝜓⟩ is called an "eigenket" of operator O.

Now consider what happens when you form the inner product of ket ∣𝜙⟩ with both sides of this equation:

(∣𝜙⟩ , O ∣𝜓⟩) = (∣𝜙⟩ , 𝜆 ∣𝜓⟩).

Because the first member of the inner product becomes a bra, this is

(2.11) ⟨𝜙∣ O ∣𝜓⟩ = ⟨𝜙∣ 𝜆 ∣𝜓⟩.

Expressions like this, with an operator "sandwiched" between a bra and a ket, are extremely common (and useful) in quantum mechanics.

Just as a matrix operating on a column vector gives another column vector, letting operator O work on ket ∣𝜓⟩ gives another ket, which we'll call ∣𝜓'⟩:

(2.12,13) O ∣𝜓⟩ = ∣𝜓'⟩. ⟨𝜙∣ O ∣𝜓⟩ = ⟨𝜙∣𝜓'⟩.

This inner product is proportional to the projection of ket ∣𝜓'⟩ onto the direction of ket ∣𝜙⟩, and that projection is a scalar.

To see how that works, consider an operator Â, which can be represented as a 2 ⨯ 2 matrix:

$\bar{\bar{A}} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$

in which the elements 𝐴_{11}, 𝐴_{12}, 𝐴_{21} and 𝐴_{22} (collectively referred to as 𝐴_{𝑖𝑗}) depend on the basis system. For example, applying operator Â to each of the orthonormal basis vectors 𝜖_{1} and 𝜖_{2}, represented by kets ∣𝜖_{1}⟩ and ∣𝜖_{2}⟩, the matrix elements determine the "amount" of each basis vector in the result:

(2.14) $\hat{A}|\epsilon_1\rangle = A_{11}|\epsilon_1\rangle + A_{21}|\epsilon_2\rangle \qquad \hat{A}|\epsilon_2\rangle = A_{12}|\epsilon_1\rangle + A_{22}|\epsilon_2\rangle.$

Notice that it's the columns of Â that determine the amount of each basis vector. Now take the inner product of the first of these equations with the first basis ket ∣𝜖_{1}⟩:

$\langle\epsilon_1|\hat{A}|\epsilon_1\rangle = A_{11}\langle\epsilon_1|\epsilon_1\rangle + A_{21}\langle\epsilon_1|\epsilon_2\rangle = A_{11}.$

Taking the inner product of the second equation in (2.14) with the first basis ket ∣𝜖_{1}⟩ gives

$\langle\epsilon_1|\hat{A}|\epsilon_2\rangle = A_{12}\langle\epsilon_1|\epsilon_1\rangle + A_{22}\langle\epsilon_1|\epsilon_2\rangle = A_{12}.$

Forming the inner products of both equations in (2.14) with the second basis ket ∣𝜖_{2}⟩ yields the remaining two elements, so

(2.15) $A_{11} = \langle\epsilon_1|\hat{A}|\epsilon_1\rangle \quad A_{12} = \langle\epsilon_1|\hat{A}|\epsilon_2\rangle \quad A_{21} = \langle\epsilon_2|\hat{A}|\epsilon_1\rangle \quad A_{22} = \langle\epsilon_2|\hat{A}|\epsilon_2\rangle,$

which can be written concisely as

(2.16) $A_{ij} = \langle\epsilon_i|\hat{A}|\epsilon_j\rangle.$

Here's an example. Consider the operator represented in the Cartesian coordinate system by the matrix

$\bar{\bar{R}} = \begin{pmatrix} 4 & -2 \\ -2 & 4 \end{pmatrix}$

with orthonormal basis kets $|\epsilon_1\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $|\epsilon_2\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Thus

$R_{11} = \langle\epsilon_1|\hat{R}|\epsilon_1\rangle = 4 \quad R_{12} = \langle\epsilon_1|\hat{R}|\epsilon_2\rangle = -2 \quad R_{21} = \langle\epsilon_2|\hat{R}|\epsilon_1\rangle = -2 \quad R_{22} = \langle\epsilon_2|\hat{R}|\epsilon_2\rangle = 4.$

Here the basis vectors 𝜖̄_{1} and 𝜖̄_{2} represent the Cartesian unit vectors 𝑖̄ and 𝑗̄.
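The recipe A_ij = ⟨𝜖ᵢ|Â|𝜖ⱼ⟩ of Eq. (2.16) can be verified numerically: sandwiching a matrix between orthonormal basis kets returns its elements. A sketch assuming NumPy, reusing the matrix from Eq. (2.4):

```python
import numpy as np

# Recovering matrix elements via Eq. (2.16): A_ij = <eps_i| A |eps_j>
R = np.array([[4.0, -2.0],
              [-2.0, 4.0]])

# Orthonormal basis kets (1, 0) and (0, 1)
eps = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

elements = np.array([[np.vdot(eps[i], R @ eps[j]) for j in range(2)]
                     for i in range(2)])

print(elements)   # reproduces R
```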

One additional bit of operator mathematics, "commutation", will be encountered frequently in quantum mechanics. Two operators Ĝ and Ĥ are said to "commute" if the order of their application can be switched without changing the result. This can be written as

(2.17) Ĝ(Ĥ ∣𝜓⟩) = Ĥ(Ĝ ∣𝜓⟩) if Ĝ and Ĥ commute

or

Ĝ(Ĥ ∣𝜓⟩) - Ĥ(Ĝ ∣𝜓⟩) = 0 or (ĜĤ - ĤĜ) ∣𝜓⟩ = 0.

The quantity in parentheses, (ĜĤ - ĤĜ), is called the commutator of operators Ĝ and Ĥ and is commonly written as

(2.18) [Ĝ, Ĥ] = ĜĤ - ĤĜ.

So the bigger the change in the result caused by switching the order of operation, the bigger the commutator.

The Heisenberg Uncertainty Principle limits the precision with which two observables whose operators do not commute may be simultaneously known.
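A small numerical sketch of Eq. (2.18), assuming NumPy. The Pauli spin matrices used here are standard quantum-mechanical operators but are not introduced in this section; they serve only as a concrete example of operators that fail to commute:

```python
import numpy as np

def commutator(G, H):
    # [G, H] = GH - HG, Eq. (2.18)
    return G @ H - H @ G

# Pauli spin matrices (illustration only)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

print(commutator(sx, sy))        # equals 2i * sigma_z, i.e. nonzero
print(commutator(sx, np.eye(2))) # the identity commutes with everything
```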

**2.3 Hermitian Operators**

An important characteristic of quantum operators may be understood by considering both sides of Eq. 2.11:

(2.11) ⟨𝜙∣ O ∣𝜓⟩ = ⟨𝜙∣ 𝜆 ∣𝜓⟩.

in which ∣𝜙⟩ and ∣𝜓⟩ represent quantum wavefunctions.

Since the constant 𝜆 is outside both bra ⟨𝜙∣ and ket ∣𝜓⟩, the right side of (2.11) can be written as

(2.19) ⟨𝜙∣ 𝜆 ∣𝜓⟩ = ⟨𝜙∣𝜓⟩ 𝜆 = 𝜆 ⟨𝜙∣𝜓⟩.

The left side of (2.11) contains some interesting and useful concepts. We can think of this expression in either of two ways.

One is

⟨𝜙∣ → O ∣𝜓⟩,

in which operator O operates on ket ∣𝜓⟩, and the resulting ket is destined to run into bra ⟨𝜙∣.

Alternatively, you can view Eq. 2.11 like this:

⟨𝜙∣ O → ∣𝜓⟩

in which bra ⟨𝜙∣ is operated upon by O and the result (another bra, remember) is destined to run into ket ∣𝜓⟩.

In the first approach, if we like, we can move operator O right inside the ket brackets with the label 𝜓, making a new ket:

(2.20) O ∣𝜓⟩ = ∣O𝜓⟩,

in which what we are doing is changing the vector to which the ket refers, from 𝜓̄ to the vector produced by O operating on 𝜓̄. It's that new ket that forms an inner product with ⟨𝜙∣ in the expression ⟨𝜙∣ O ∣𝜓⟩.

But when we move the operator O inside a bra, we must change the operator, and that change is called taking the "adjoint" of the operator, written as O^{†}. So the process of moving operator O from the outside to the inside of a bra looks like this:

(2.21) ⟨𝜓∣ O = ⟨O^{†}𝜓∣,

So the bra ⟨O^{†}𝜓∣ is the dual of ket ∣O^{†}𝜓⟩.

Finding the adjoint of an operator in matrix form is straightforward: just take the complex conjugate of each element of the matrix, and then form the transpose of the matrix. If operator O has the matrix representation

(2.22) $\bar{\bar{O}} = \begin{pmatrix} O_{11} & O_{12} & O_{13} \\ O_{21} & O_{22} & O_{23} \\ O_{31} & O_{32} & O_{33} \end{pmatrix}$

then its adjoint O^{†} is

(2.23) $\bar{\bar{O}}^\dagger = \begin{pmatrix} O^*_{11} & O^*_{21} & O^*_{31} \\ O^*_{12} & O^*_{22} & O^*_{32} \\ O^*_{13} & O^*_{23} & O^*_{33} \end{pmatrix}.$

If we apply the conjugate-transpose process to a column vector, we'll see that the Hermitian adjoint of a ket is the associated bra:

$|A\rangle = \begin{pmatrix} A_1 \\ A_2 \\ A_3 \end{pmatrix} \qquad |A\rangle^\dagger = \begin{pmatrix} A^*_1 & A^*_2 & A^*_3 \end{pmatrix} = \langle A|.$
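In NumPy the adjoint is exactly "conjugate, then transpose", so both Eq. (2.23) and the ket-to-bra rule take one line each (the matrix and ket values are our own illustration):

```python
import numpy as np

O = np.array([[1 + 1j, 2 - 1j],
              [0 + 3j, 4 + 0j]])

# Adjoint: complex conjugate of each element, then transpose, as in Eq. (2.23)
O_dagger = O.conj().T

# The adjoint of a ket (column vector) is the corresponding bra (row vector)
A_ket = np.array([[1 + 2j],
                  [3 - 1j]])
A_bra = A_ket.conj().T

print(O_dagger)
print(A_bra @ A_ket)   # <A|A>, the squared norm, a real scalar
```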

If O transforms ket ∣𝜓⟩ into ket ∣𝜓'⟩, then O^{†} transforms bra ⟨𝜓∣ into bra ⟨𝜓'∣. In equations this is

(2.24) O ∣𝜓⟩ = ∣𝜓'⟩ ⟨𝜓∣ O^{†} = ⟨𝜓'∣,

We should also be aware that it's perfectly acceptable to evaluate an expression such as ⟨𝜓∣ O without moving the operator inside the bra. So if ∣𝜓⟩, ⟨𝜓∣ and O are given by

$|\psi\rangle = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \qquad \langle\psi| = \begin{pmatrix} \psi^*_1 & \psi^*_2 \end{pmatrix} \qquad \bar{\bar{O}} = \begin{pmatrix} O_{11} & O_{12} \\ O_{21} & O_{22} \end{pmatrix}$

then

(2.25) $\langle\psi|O = \begin{pmatrix} \psi^*_1 & \psi^*_2 \end{pmatrix}\begin{pmatrix} O_{11} & O_{12} \\ O_{21} & O_{22} \end{pmatrix} = \begin{pmatrix} \psi^*_1 O_{11} + \psi^*_2 O_{21} & \psi^*_1 O_{12} + \psi^*_2 O_{22} \end{pmatrix},$

which is the same result as ⟨O^{†}𝜓∣:

(2.26) $\bar{\bar{O}}^\dagger = \begin{pmatrix} O^*_{11} & O^*_{21} \\ O^*_{12} & O^*_{22} \end{pmatrix}$

$\langle O^\dagger\psi| = |O^\dagger\psi\rangle^\dagger = \left(O^\dagger|\psi\rangle\right)^\dagger = \left[\begin{pmatrix} O^*_{11} & O^*_{21} \\ O^*_{12} & O^*_{22} \end{pmatrix}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}\right]^\dagger$

$= \begin{pmatrix} O^*_{11}\psi_1 + O^*_{21}\psi_2 \\ O^*_{12}\psi_1 + O^*_{22}\psi_2 \end{pmatrix}^\dagger = \begin{pmatrix} \psi^*_1 O_{11} + \psi^*_2 O_{21} & \psi^*_1 O_{12} + \psi^*_2 O_{22} \end{pmatrix},$

in agreement with (2.25).

So we can see the equivalence of the following expressions:

(2.27) ⟨𝜙∣ O ∣𝜓⟩ = ⟨𝜙∣O𝜓⟩ = ⟨O^{†}𝜙∣𝜓⟩.

An especially important class of operators are called "Hermitian", and their defining characteristic is this: Hermitian operators equal their own adjoints. So if O is a Hermitian operator, then

(2.28) O = O^{†} (Hermitian O)

Comparing Eqs. 2.22 and 2.23, we can see that for a Hermitian operator the diagonal elements must all be real, and each off-diagonal element must equal the complex conjugate of the corresponding element on the other side of the diagonal.

Thus if operator O equals its adjoint O^{†}, then

(2.29) ⟨𝜙∣ O ∣𝜓⟩ = ⟨𝜙∣O𝜓⟩ = ⟨O^{†}𝜙∣𝜓⟩ = ⟨O𝜙∣𝜓⟩,

which means that a Hermitian operator may be applied to *either* member of an inner product with the same result.

For complex continuous functions such as 𝑓(𝑥) and 𝑔(𝑥), the equivalent to Eq. 2.29 is

(2.30) ∫_{-∞}^{∞}𝑓^{*}(𝑥)[O𝑔(𝑥)] 𝑑𝑥 = ∫_{-∞}^{∞}[O^{†}𝑓(𝑥)]^{*}𝑔(𝑥) 𝑑𝑥 = ∫_{-∞}^{∞}[O𝑓(𝑥)]^{*}𝑔(𝑥) 𝑑𝑥.

Thus if ∣𝜓⟩ is an eigenket of Hermitian operator O with eigenvalue 𝜆, then ∣O𝜓⟩ = ∣𝜆𝜓⟩ and ⟨O𝜓∣ = ⟨𝜆𝜓∣, so

(2.32) ⟨𝜓∣𝜆𝜓⟩ = ⟨𝜆𝜓∣𝜓⟩.

For kets, we can move a constant into or out of the ket, even if that constant is complex, without changing it. So

(2.33) $c|A\rangle = |cA\rangle$, because $c|A\rangle = c\begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix} = \begin{pmatrix} cA_x \\ cA_y \\ cA_z \end{pmatrix} = |cA\rangle.$

But for bras, if we want to move a constant from one side of the bra symbol to the other, it's necessary to take the complex conjugate of that constant:

(2.34) 𝑐⟨𝐴∣ = ⟨𝑐^{*}𝐴∣,

because in this case

𝑐⟨𝐴∣ = 𝑐(𝐴^{*}_{x} 𝐴^{*}_{y} 𝐴^{*}_{z}) = (𝑐𝐴^{*}_{x} 𝑐𝐴^{*}_{y} 𝑐𝐴^{*}_{z}) = ((𝑐^{*}𝐴_{x})^{*} (𝑐^{*}𝐴_{y})^{*} (𝑐^{*}𝐴_{z})^{*}) = ⟨𝑐^{*}𝐴∣.

Thus in (2.32), pulling the constant 𝜆 out of the ket on the left side and out of the bra on the right side gives

(2.35) ⟨𝜓∣𝜆𝜓⟩ = 𝜆⟨𝜓∣𝜓⟩ and ⟨𝜆𝜓∣𝜓⟩ = 𝜆^{*}⟨𝜓∣𝜓⟩.

And since the two expressions in (2.32) are equal, Eq. 2.35 gives

(2.36) 𝜆⟨𝜓∣𝜓⟩ = 𝜆^{*}⟨𝜓∣𝜓⟩.

Hence 𝜆 = 𝜆^{*}, which means that the eigenvalue 𝜆 must be real. So Hermitian operators must have real eigenvalues.

Consider the case in which 𝜙 is an eigenfunction of Hermitian operator O with eigenvalue 𝜆_{𝜙} and 𝜓 is also an eigenfunction of O with eigenvalue 𝜆_{𝜓}. Eq. 2.29 is then

⟨𝜙∣ O ∣𝜓⟩ = ⟨𝜙∣𝜆_{𝜓}𝜓⟩ = ⟨𝜆_{𝜙}𝜙∣𝜓⟩

𝜆_{𝜓}⟨𝜙∣𝜓⟩ = 𝜆^{*}_{𝜙}⟨𝜙∣𝜓⟩ = 𝜆_{𝜙}⟨𝜙∣𝜓⟩

(𝜆_{𝜓} - 𝜆_{𝜙})⟨𝜙∣𝜓⟩ = 0.

This means that whenever (𝜆_{𝜓} - 𝜆_{𝜙}) is not zero, ⟨𝜙∣𝜓⟩ must equal zero, so the eigenfunctions of a Hermitian operator with different eigenvalues must be orthogonal.
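Both key properties of Hermitian operators (real eigenvalues, orthogonal eigenvectors for distinct eigenvalues) can be seen numerically; a sketch assuming NumPy, with an example Hermitian matrix of our own choosing:

```python
import numpy as np

# A Hermitian matrix: real diagonal, off-diagonal pair are complex conjugates
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# H equals its own adjoint (conjugate transpose), Eq. (2.28)
print(np.allclose(H, H.conj().T))   # True

# eigh is NumPy's eigensolver for Hermitian matrices; it returns REAL eigenvalues
eigvals, eigvecs = np.linalg.eigh(H)
print(eigvals)

# Eigenvectors belonging to different eigenvalues are orthogonal
overlap = np.vdot(eigvecs[:, 0], eigvecs[:, 1])
print(overlap)   # essentially zero
```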

If two or more eigenfunctions share an eigenvalue, that's called the "**degenerate**" case, and the eigenfunctions will not, in general, be orthogonal. In the degenerate case there are an infinite number of non-orthogonal eigenfunctions, from which you can always construct an orthogonal set.

There's one more useful characteristic of the eigenfunctions of a **Hermitian operator**: they form a complete set. That means that any function in the abstract vector space containing the eigenfunctions of a Hermitian operator may be made up of a linear combination of those eigenfunctions.

The discussion of the solutions to the Schrödinger equation in Chapter 4 will show that every quantum observable (such as position, momentum, and energy) is associated with an operator, and the possible results of any measurement are given by the eigenvalues of that operator. Since the results of measurements must be real, operators associated with observables must be Hermitian.

**2.4 Projection Operators**

A very useful Hermitian operator we will encounter in quantum mechanics is the "projection operator". To understand it, consider the ket representing a three-dimensional vector 𝐴̄. Expanding that ket using the basis kets representing orthonormal vectors 𝜖̄_{1}, 𝜖̄_{2} and 𝜖̄_{3} looks like this:

(2.37, 38, 39) ∣𝐴⟩ = 𝐴_{1}∣𝜖_{1}⟩ + 𝐴_{2}∣𝜖_{2}⟩ + 𝐴_{3}∣𝜖_{3}⟩ = ⟨𝜖_{1}∣𝐴⟩ ∣𝜖_{1}⟩ + ⟨𝜖_{2}∣𝐴⟩ ∣𝜖_{2}⟩ + ⟨𝜖_{3}∣𝐴⟩ ∣𝜖_{3}⟩ = ∣𝜖_{1}⟩ ⟨𝜖_{1}∣𝐴⟩ + ∣𝜖_{2}⟩ ⟨𝜖_{2}∣𝐴⟩ + ∣𝜖_{3}⟩ ⟨𝜖_{3}∣𝐴⟩,

in which we can consider grouping ∣𝜖_{1}⟩ ⟨𝜖_{1}∣, ∣𝜖_{2}⟩ ⟨𝜖_{2}∣ and ∣𝜖_{3}⟩ ⟨𝜖_{3}∣ as projection operators. The general expression for a projection operator is

(2.41) 𝑃^{^}_{𝑖} = ∣𝜖_{𝑖}⟩ ⟨𝜖_{𝑖}∣,

where 𝜖_{𝑖}̄ is any normalized vector. Feeding operator 𝑃^{^}_{1} the ket representing vector 𝐴̄ shows what's happening:

(2.42) 𝑃^{^}_{1} ∣𝐴⟩ = ∣𝜖_{1}⟩ ⟨𝜖_{1}∣𝐴⟩ = 𝐴_{1} ∣𝜖_{1}⟩.

So applying the projection operator to ∣𝐴⟩ produces a new ket 𝐴_{1} ∣𝜖_{1}⟩. For completeness, applying 𝑃^{^}_{2} and 𝑃^{^}_{3} to ∣𝐴⟩ gives

(2.43) 𝑃^{^}_{2} ∣𝐴⟩ = ∣𝜖_{2}⟩ ⟨𝜖_{2}∣𝐴⟩ = 𝐴_{2} ∣𝜖_{2}⟩ and 𝑃^{^}_{3} ∣𝐴⟩ = ∣𝜖_{3}⟩ ⟨𝜖_{3}∣𝐴⟩ = 𝐴_{3} ∣𝜖_{3}⟩.

Summing these three results gives

𝑃^{^}_{1} ∣𝐴⟩ + 𝑃^{^}_{2} ∣𝐴⟩ + 𝑃^{^}_{3} ∣𝐴⟩ = 𝐴_{1} ∣𝜖_{1}⟩ + 𝐴_{2} ∣𝜖_{2}⟩ + 𝐴_{3} ∣𝜖_{3}⟩ = ∣𝐴⟩ or (𝑃^{^}_{1} + 𝑃^{^}_{2} + 𝑃^{^}_{3}) ∣𝐴⟩ = ∣𝐴⟩.

Writing this for the general case in an 𝑁-dimensional space:

(2.44) ∑_{𝑛=1}^{𝑁}𝑃^{^}_{𝑛} ∣𝐴⟩ = ∣𝐴⟩.

This means that the sum of the projection operators using all of the basis vectors equals the "identity operator" I. The identity operator is the Hermitian operator that produces a ket that is equal to the ket that is fed into the operator:

(2.45) I ∣𝐴⟩ = ∣𝐴⟩.

The matrix representation 𝐼_{matrix} of the identity operator in three dimensions is

(2.46) $\bar{\bar{I}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$

The relation

(2.47) ∑_{𝑛=1}^{𝑁}𝑃^{^}_{𝑛} = ∑_{𝑛=1}^{𝑁} ∣𝜖_{𝑛}⟩ ⟨𝜖_{𝑛}∣ = I

is called the "completeness" or **"closure" relation**, since it holds true when applied to any ket in an 𝑁-dimensional space. That means that any ket in that space can be represented as the sum of 𝑁 basis kets weighted by 𝑁 components.

The projection operator in an 𝑁-dimensional space may be represented by an 𝑁⨯𝑁 matrix as in (2.16), and to write it down it's necessary to decide which basis system you'd like to use.

One option is to use the basis system consisting of the eigenkets of the operator. In that basis, the matrix representing the operator is diagonal, and each of the diagonal elements is an eigenvalue of the matrix.

For 𝑃^{^}_{1} the eigenket equation is

(2.48) 𝑃^{^}_{1} ∣𝐴⟩ = 𝜆_{1} ∣𝐴⟩,

where ∣𝐴⟩ is an eigenket of 𝑃^{^}_{1} with eigenvalue 𝜆_{1}. Feeding ∣𝜖_{1}⟩ to 𝑃^{^}_{1} gives

𝑃^{^}_{1} ∣𝜖_{1}⟩ = ∣𝜖_{1}⟩ ⟨𝜖_{1}∣𝜖_{1}⟩ = (1) ∣𝜖_{1}⟩, so 𝜆_{1} = 1.

Hence ∣𝜖_{1}⟩ is indeed an eigenket of 𝑃^{^}_{1}, and the eigenvalue is one. Similarly ∣𝜖_{2}⟩ and ∣𝜖_{3 }⟩ are eigenkets of 𝑃^{^}_{1} with eigenvalues of 0 and 0 respectively. With these eigenkets the matrix elements (𝑃_{1})_{𝑖𝑗} can be found using Eq. 2.16:

(2.49) (𝑃_{1})_{𝑖𝑗} = ⟨𝜖_{𝑖}∣ 𝑃^{^}_{1} ∣𝜖_{𝑗}⟩.

Then (𝑃_{1})_{11} = 1 and the rest of (𝑃_{1})_{𝑖𝑗} are all zero.

Thus the matrix representing 𝑃^{^}_{1} in the basis of its eigenkets ∣𝜖_{1}⟩, ∣𝜖_{2}⟩, and ∣𝜖_{3}⟩ is

(2.50) $\bar{\bar{P}}_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$

Through a similar analysis we obtain

(2.51, 52) $\bar{\bar{P}}_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad \bar{\bar{P}}_3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$

According to the completeness relation, Eq. 2.47, the matrix representations of the projection operators should add up to the identity operator:

(2.53) _{matrix}𝑃_{1} + _{matrix}𝑃_{2} + _{matrix}𝑃_{3} = _{matrix}I.

An alternative method of finding the matrix elements of the projection operators is to use the outer-product rule for matrix multiplication:

$\hat{P}_1 = |\epsilon_1\rangle\langle\epsilon_1| = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

$\hat{P}_2 = |\epsilon_2\rangle\langle\epsilon_2| = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

$\hat{P}_3 = |\epsilon_3\rangle\langle\epsilon_3| = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\begin{pmatrix} 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$

The projection operator is useful in determining the probability of measurement outcomes for a quantum observable by projecting the state of a system onto the eigenstates of the operator for that observable.
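The outer-product construction of Eq. (2.41) and the completeness relation of Eq. (2.47) can be demonstrated together in a few lines (a sketch assuming NumPy; the vector Ā is our own example):

```python
import numpy as np

# Projection operators P_i = |eps_i><eps_i| built with the outer product
eps = np.eye(3)   # columns are the orthonormal basis kets (1,0,0), (0,1,0), (0,0,1)

P = [np.outer(eps[:, i], eps[:, i].conj()) for i in range(3)]

A = np.array([5.0, -2.0, 7.0])
print(P[0] @ A)               # picks out the component along |eps_1>: [5, 0, 0]

# Completeness / closure relation, Eq. (2.47): the projectors sum to the identity
total = P[0] + P[1] + P[2]
print(total)
```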