There are two different ways to multiply scalars, vectors and matrices. The first is element by element multiplication, where each element of the first array multiplies the corresponding element of the second array; both arrays must be equally sized. The second is array multiplication, where the inner dimensions must match, i.e. the number of columns of the first array must equal the number of rows of the second array. In Numerical Python (NumPy) there are four functions for multiplication. The most commonly used are multiply, for element by element multiplication, and dot, for array multiplication. Due to the way that Python handles vectors, there is an additional function called outer, which is used for obtaining the outer product of two equally sized vectors. Its counterpart inner accompanies outer but more or less replicates dot.

## Scalar

Starting off with a scalar, these two operations may at first glance appear identical. For instance, suppose we purchase a single item type, 5 pens, from a single store at a fixed price of £2. Then the total spend per item type is £10 and the total spend on all items is also £10, because we only purchased one item type.
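A minimal sketch of this in NumPy (the variable names quantity and price are assumptions; total_cost_per_item and total_cost match the discussion that follows):

```
import numpy as np

quantity = np.array([5])  # 5 pens
price = np.array([2])     # £2 each

total_cost_per_item = np.multiply(quantity, price)  # element by element
total_cost = np.dot(quantity, price)                # array multiplication

print(total_cost_per_item)
print(total_cost)
```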

```
[10]
10
```

Both results are 10, as expected. However, on closer observation, total_cost_per_item is printed with [ ] and total_cost is not, which shows that there is a difference between these two types of multiplication. This difference corresponds to dimensionality and can be observed when we move to more complicated objects such as vectors.

## Vectors

When we move to a vector we can immediately see the difference between multiply, which gives the total spent on each item, and dot, which gives the total spent.
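A sketch of the vector case, assuming quantities of 5 and 3 of two item types priced at £2 and £6 (values inferred from the output shown):

```
import numpy as np

q = np.array([5, 3])  # quantity of each item type
p = np.array([2, 6])  # price of each item type in £

print(np.multiply(q, p))  # total spent on each item
print(np.dot(q, p))       # total spent overall
```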

```
[10 18]
28
```

The first method used element by element multiplication. In mathematics, the first method would be written out as:
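With q = [5 3] and p = [2 6] as above:

$latex \displaystyle \left[ {\begin{array}{*{20}{c}} 5 & 3 \end{array}} \right]\times \left[ {\begin{array}{*{20}{c}} 2 & 6 \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} 5\times 2 & 3\times 6 \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} 10 & 18 \end{array}} \right]$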

The second method would be written out as:
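With the same values, now treated as a row vector times a column vector:

$latex \displaystyle \left[ {\begin{array}{*{20}{c}} 5 & 3 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} 2 \\ 6 \end{array}} \right]=5\times 2+3\times 6=28$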

q on the left hand side has 1 row by 2 columns and p has 2 rows by 1 column. For vector multiplication to take place, the inner dimensions must match i.e. columns of q and rows of p have to be equal and this condition is satisfied.

Now if we look at the vector p in the variable explorer, we can see that it is listed as a row vector:

However when we expand it, it is listed as a column vector:

So you may be tempted to ask, is p a row vector or is it a column vector?

Well, let's look at taking the transpose of p:

And the transpose of p, pt, is identical to p…
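A sketch of this, assuming p = [2 6] as before; transposing a 1-D NumPy array has no effect:

```
import numpy as np

p = np.array([2, 6])
pt = p.T  # transposing a 1-D array leaves it unchanged

print(p)
print(pt)
```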

In essence, Python represents all vectors in the manner above and will treat a vector as either a row or a column depending on the circumstance. So in the variable explorer, to make the values easiest to see in the limited screen space, it is listed as a row, and when the variable is expanded it is displayed as a column. This is because when one is plotting data one typically has a column of data to plot, which is a common use for a vector.

In the case of element by element operations such as addition, subtraction and element by element multiplication, and also for the dot product, it doesn't matter whether the data is represented as a row or a column; one obtains the same results. Moreover, when a vector is being multiplied by a matrix, Python will automatically orient the vector to facilitate multiplication, i.e. so that the inner dimensions match. This works well in most cases… for example:
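A sketch, assuming q is a length-3 vector and p is a 3 × 2 matrix (values inferred from the output shown):

```
import numpy as np

q = np.array([1, 2, 3])   # length-3 vector
p = np.array([[1, 2],
              [3, 4],
              [5, 6]])    # 3 x 2 matrix

# q is automatically treated as a 1 x 3 row vector:
print(np.dot(q, p))
```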

`[22 28]`

This means that the vector has automatically been treated as a row, and the inner dimensions are 3, allowing multiplication to take place.

Modifying the code so that q is a matrix and p is a vector:
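A sketch of the swapped case, with q now a 2 × 3 matrix and p a length-3 vector (values inferred from the output shown):

```
import numpy as np

q = np.array([[1, 2, 3],
              [4, 5, 6]])  # 2 x 3 matrix
p = np.array([1, 2, 3])    # length-3 vector

# p is automatically treated as a 3 x 1 column vector:
print(np.dot(q, p))
```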

`[14 32]`

This means that the vector has automatically been treated as a column, and the inner dimensions are 3, once again allowing multiplication to take place.

This automatic selection works in most cases, except when the two vectors have matching dimensions… In that situation there is a clear difference between outer vector multiplication and inner vector multiplication:

By default the function dot will calculate the inner product, as we have seen earlier. We tried earlier to transpose the vectors and nothing happened; if we try to transpose them while they are an input argument to the NumPy function dot, we will once again see that nothing happens:
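A sketch, using the q = [5 3] and p = [2 6] vectors from earlier:

```
import numpy as np

q = np.array([5, 3])
p = np.array([2, 6])

# transposing 1-D arrays has no effect, so both give the inner product:
print(np.dot(q.T, p))
print(np.dot(q, p.T))
```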

`28`

`28`

This means we cannot use dot to carry out the above problem. To get around this, there is a function called outer, which will orient the vectors so that inner dimensions of 1 match:

$latex \displaystyle \left[ {\begin{array}{*{20}{c}} 5 \\ 3 \end{array}} \right]*\left[ {\begin{array}{*{20}{c}} 2 & 6 \end{array}} \right]$
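In NumPy this might look like the following sketch, reusing q = [5 3] and p = [2 6]:

```
import numpy as np

q = np.array([5, 3])
p = np.array([2, 6])

print(np.outer(q, p))  # 2 x 2 outer product
```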

```
[[10 30]
[ 6 18]]
```

For outer multiplication, the largest dimension of the vector is taken to be on the outside of the multiplication:
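With the same values, the outer product reads:

$latex \displaystyle \left[ {\begin{array}{*{20}{c}} 5 \\ 3 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} 2 & 6 \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} 10 & 30 \\ 6 & 18 \end{array}} \right]$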

The Vector to the left hand side has 2 Rows by 1 Column and the Vector to the right hand side has 1 Row by 2 Columns. The inner dimensions of 1 match so multiplication can take place. In other words the Vector to the left hand side is a Column Vector and the Vector to the right hand side is a Row Vector.

For inner multiplication, the largest dimension of the vector is taken to be on the inside of the multiplication:
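And with the same values, the inner product reads:

$latex \displaystyle \left[ {\begin{array}{*{20}{c}} 5 & 3 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} 2 \\ 6 \end{array}} \right]=5\times 2+3\times 6=28$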

The Vector to the left hand side has 1 Row by 2 Columns and the Vector to the right hand side has 2 Rows by 1 Column. The inner dimensions of 2 match so multiplication can take place. In other words the Vector to the left hand side is a Row Vector and the Vector to the right hand side is a Column Vector.
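Both results side by side, as a sketch with the same q and p:

```
import numpy as np

q = np.array([5, 3])
p = np.array([2, 6])

print(np.outer(q, p))  # outer product: a 2 x 2 matrix
print(np.inner(q, p))  # inner product: a scalar
```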

```
[[10 30]
[ 6 18]]
28
```

In addition to the function outer there is the function inner. For two vectors this just replicates the functionality of the function dot, giving the inner product, but it is useful to be able to specify inner or outer when dealing with two equally sized vectors, just for clarity.

If outer is used with matrices, it will flatten the matrices into vectors, so be careful:
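A minimal sketch of this behaviour, assuming a 4 × 1 column matrix and a 1 × 4 row matrix (values inferred from the output shown):

```
import numpy as np

col = np.array([[1], [2], [3], [4]])  # a 4 x 1 matrix
row = np.array([[1, 2, 3, 4]])        # a 1 x 4 matrix

# np.outer ravels both arguments to 1-D before multiplying,
# so these differently shaped matrices are treated identically:
print(col)
print(row)
```

Calling np.outer on these matrices therefore gives the same 4 × 4 result as calling it on the flattened vectors.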

```
[[1]
[2]
[3]
[4]]
```

`[[1 2 3 4]]`