Spyder is a Python Integrated Development Environment (IDE) with a user interface similar to Matrix Laboratory (MATLAB), a commercial product commonly used in the physical sciences and engineering. Python and the Spyder IDE are open source and can be freely downloaded as part of the Anaconda Python distribution.
This guide examines array manipulation using the Numeric Python library (numpy). It is aimed at beginners who want to learn scientific computing and at those transferring from MATLAB. Before starting with the numpy library, you should have Python installed and should be familiar with the concepts behind core Python.
Python with the numeric python library (numpy), the plotting library (matplotlib) and the Python data analysis library (pandas) can replace the commercial product MATLAB for most applications.
Table of contents
- Core Python
- matlab vs python
- practical applications of arrays
- core python: nested lists
- The numpy library: numpy arrays
- Importing the numpy library
- attributes and methods
- the reshape function: explicitly setting a vector to a row or column
- element by element operations
- indexing, reshaping and concatenation practice
- functions for rapid array generation
- sorting data in numpy arrays
- statistical functions
- element by element multiplication vs array multiplication
- array division and interpolation
If you haven't already installed Python and Spyder 4 using the Anaconda installation, see my installation guide.
It is not recommended to jump straight into the numpy library until you have a grasp of core Python. I have a beginner's guide on core Python here:
matlab vs python
If you are migrating from MATLAB to Python, there are many similarities between the two syntaxes; however, there are also some major differences when it comes to array manipulation. Take particular note of:
| MATLAB | Python |
|---|---|
| 1st order indexing | 0th order indexing |
| 2 delimiters: `,` for a new column, `;` for a new row | 1 delimiter: `,` (nested lists) |
| arrays are used for general purposes and for numeric data | lists are general purpose and can be of mixed data types; numpy arrays (`import numpy as np`) are used for numeric data and plotting |
| vectors are explicitly defined as rows or columns | vectors have dual behaviour and a single dimension unless specifically defined |
| output is shown by default; end a line in `;` to hide it | output is hidden by default; use `print()` to explicitly show it |
practical applications of arrays
You may be familiar with scalars (0d arrays), vectors (1d arrays), matrices (2d arrays), books (3d arrays) and nd arrays. These are often taught in mathematics classes with little description of their practical applications. Therefore, to begin, we will look at some practical applications of arrays in science and computer science.
scalars (0 dimensional arrays)
A scalar is just a single number which you are used to working with when performing a simple calculation.
x=1
y=2
x+y
vectors (1 dimensional arrays)
A vector is a series of numbers. Vectors are commonly used to store a series of linked values. For example, suppose a velocity measurement is carried out at time 0 giving a velocity of 0, time 10 giving a velocity of 227.04, time 15 giving a velocity of 362.78, time 20 giving a velocity of 517.35, time 22.5 giving a velocity of 602.95 and time 30 giving a velocity of 901.67. The data may be stored as two vectors.
These equally sized vectors can then be fed into plotting programs, for example matplotlib.pyplot which I will discuss in a later guide, and graphed to visually represent the dataset.
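As a sketch, the measurements above could be stored as two equally sized numpy arrays (the names t and v are my own choice):

```python
import numpy as np

# time (s) and velocity measurements stored as two linked, equally sized vectors
t = np.array([0, 10, 15, 20, 22.5, 30])
v = np.array([0, 227.04, 362.78, 517.35, 602.95, 901.67])
print(t.size, v.size)  # 6 6 - one velocity per time point
```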
matrices (2 dimensional arrays)
A matrix is a series of vectors, or if you like, a series of equally sized series. It is better visualised with an example; here is something you may be familiar with, a black and white picture. In this case it is opened within Python and an x-scale and y-scale are shown. The numbers on each scale denote the x position and y position of each pixel respectively. The third axis, the z-scale, denotes the brightness, which is normalised between 0 and 1 in this case.
We can print this image onto a piece of paper using a so-called "dot matrix printer". Newer printing technologies exist but we will use the term dot matrix as it explains exactly what a printer does. The dot matrix printer essentially treats the piece of paper as a grid and prints a dot of ink at each point in the grid. If the pixel is to be white, no ink is added; if the pixel is black, the maximum amount of ink is added. If the shade is to be intermediate, an intermediate level of ink is added. Each dot can be translated to a number which represents the level of ink in the dot. A common printing resolution is 600 dpi, meaning there are 600 dots per inch on the piece of paper. This translates to a piece of A4 paper merely being treated as a matrix that has 7014 rows and 4962 columns. The product of these scalar values gives 34,803,468 pixels, which is abbreviated to ~35 million pixels or 35 megapixels.
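The arithmetic can be checked in a couple of lines (values taken from the text above):

```python
# A4 paper treated as a dot matrix at 600 dots per inch
rows, cols = 7014, 4962  # ~11.69 in and ~8.27 in at 600 dpi
dots = rows * cols
print(dots)  # 34803468, i.e. roughly 35 megapixels
```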
books (3 dimensional arrays)
We have seen earlier that a single black and white image is a 2 dimensional array. We can create a book of black and white images also known as a 3 dimensional array. In a book, each page (2d array) is the same size and has the same number of rows and columns. The third dimension is the page number. A black and white video can also be thought of as being in the form of a book where we call the pages, frames. A common refresh rate in computers is 60 frames per second. This means our computer has flicked through 60 pages in a black and white book every second when watching the black and white video.
Visible light is defined as electromagnetic radiation that can be detected by the human eye, and the human eye can only see within a very small window of 0.39–0.75 × 10⁻⁶ m out of the much larger 10⁻¹⁶ to 10⁶ m scope. The human eye has three types of detectors, S-cones, M-cones and L-cones, for Short, Medium and Long wavelength detection respectively. The average response from the three detectors in a human eye is as follows and this is of course the origin of the three "primary colors".
Light has wave and particle like properties. When we think of light as particles, we call the "particles of light" photons. The eye has three types of detectors known as S-cones, M-cones and L-cones which give the physiological reason behind the three primary colors. A single photon of light can be detected at 400 nm or 525 nm by an S-cone and the eye won’t know the difference between these two photons. However, for a large enough distribution of photons the brain will look at the ratio collected at the S-cones, M-cones and L-cones respectively and assign this intensity ratio to a "color". For instance, light at 400 nm has a high probability of detection at the S-cones and a comparatively low ratio of detection at the M-cones and L-cones and hence appears to the brain as the color "blue". On the other hand, light at 700 nm will give a low ratio of detection at the S-cones and M-cones and a high ratio of detection at the L-cones and so appear the color "red". Light at 500 nm will have a small ratio at the S-cones, have a high ratio of detection at the M-cones and a medium ratio of detection at the L-cones and appear to the brain as the color "green".
Three LEDs may be selected that are blue, green and red respectively, typical emission spectra of these LEDs are shown. These LEDs can be designed so that their output overlaps spatially.
If we examine the overlapping regime between the red and the green LEDs, both the M-cones and L-cones of the eye will detect light while the S-cones won't. Using this ratio, the brain translates this as the color "yellow", i.e. what the brain receives from the eyes is a color ratio. It is important to note, however, that no yellow light is actually generated, and this is seen if one looks at the spectra of the LEDs. Using the spectra above, when both the green and red LEDs are on there is no output at 575 nm; there is actually a dip in the emission here.
The RGB LED triplet seen above is routinely miniaturised with a more accurate overlap. In such a configuration each LED can independently have its brightness varied. The brightness levels for each LED are typically set to 256 levels (8 bit) and the associated color combinations cover any color the eye can see, i.e. all the color ratios the brain recognises and maps to a color. This is known as additive addition. A single RGB triplet is known as a pixel. These RGB triplets are then built up into arrays. These RGB arrays are known as screens and are used daily in electronic devices such as laptops, phones and cameras.
Screen resolution is usually defined in terms of pixels; in the case of the Dell XPS 13 this is 3200×1800 pixels, which means 1800 rows and 3200 columns. A graphical computer program on this XPS 13 9365, for instance, will send a command to the computer to process a 1800 row by 3200 column matrix of red values, a 1800 row by 3200 column matrix of green values and a 1800 row by 3200 column matrix of blue values, and if it is running at 60 frames per second it will do this every 0.017 s.
The image that we initially displayed in black and white originated from a colored photo. It is shown below with the intensity ratio of the red, green and blue channels respectively. Each (x,y) pixel therefore is merely a triplet of numbers ranging from 0-255.
In other words every image seen on a screen is a book of 3 numerical matrices or pages; a red page, a green page and a blue page. When you are looking at an image on a screen you are interacting with a 3D array.
Returning to print media, when we print an image, subtractive addition of light is used as opposed to additive addition. In subtractive addition, light is not generated but rather subtracted from room light. Room light is incident on a reflective surface such as a sheet of paper. This piece of paper appears to be white as all incident light is reflected.

Inks are used to color the paper. First of all there is the cyan dye, which contains chemicals that remove light in the red while reflecting blue and green light, giving the cyan appearance. This can be thought of as white−red=green+blue=cyan. Then there is the magenta dye, which removes light in the green and passes light in the red and blue. This can be thought of as white−green=red+blue=magenta. Then there is the yellow dye, which removes the blue light and reflects the green and red light. This can be thought of as white−blue=green+red=yellow. Finally, as these three inks do not act as perfect "primary color removers", there is usually a black ink which removes all the white light; the absence of light appears black.

Once again, by fine tuning the levels and concentrations of dyes, any color can be produced, although in practice the range is more limited as room light and daylight vary from place to place and time to time. Subtractive addition is the basis of the arts.
4 dimensional and n dimensional arrays
A colored video (without any sound) is a 4 dimensional array. Each channel of a frame has the same number of rows (1st dimension) and columns (2nd dimension). We need three channels for the three primary colors red, green and blue, giving the third dimension. A colored video displayed on a screen with a frame rate of 60 frames per second has time as the 4th dimension.
Array manipulation is therefore not some abstract form of mathematics; you likely carry it out routinely when using your computer. Photo-editing a black and white photo is 2 dimensional array manipulation, photo-editing a colored image is 3 dimensional array manipulation and video-editing a colored but silent video at 60 frames per second is in essence 4 dimensional array manipulation.
Additional signals such as sound can of course be added making it a more complicated higher dimensional array.
core python: nested lists
Let's explore some of the core classes in Python and use these to look at very simple arrays. First let's look at a scalar, which is just a single number you are well used to.
Now let's look at a list of scalars, known as a vector.
We can double click the list to expand it in the variable explorer. Here we see that each item in the list has a value and an index.
It is insightful to visualise the numbers in the vector as an object. For convenience we will use the example of smarties sorted into compartments by color.
Each compartment or element has a compartment or element number beginning from 0 (zero order indexing) and moving up in steps of 1. We can think of these element numbers like door numbers.
Now as we are visualising the numbers as physical objects, we can see there is 1 cyan smarty in compartment 0, 2 green smarties in compartment 1, 3 red smarties in compartment 2 and 4 yellow smarties in compartment 3. This can be checked by indexing using square brackets and the compartment number (otherwise known as the element number or index).
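As a sketch, with the four compartments stored as a list, indexing looks like this:

```python
v = [1, 2, 3, 4]  # cyan, green, red, yellow smarties
print(v[0])  # 1 - one cyan smarty in compartment 0
print(v[1])  # 2 - two green smarties in compartment 1
print(v[3])  # 4 - four yellow smarties in compartment 3
```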
Note that because the numbers are physical objects in compartments, a red smarty cannot magically become a green smarty and so on. This means that should we remove the two green smarties from compartment 1 and eat them, the result will look like the following.
i.e. compartment 1 will still exist but have no smarties in it. Let's copy this to a new variable name using the method copy and then update the values of the green smarties in the copy.
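A minimal sketch of the copy-then-update step described above:

```python
v0 = [1, 2, 3, 4]
v1 = v0.copy()  # an independent copy, not a second reference to v0
v1[1] = 0       # eat the two green smarties in compartment 1
print(v0)  # [1, 2, 3, 4] - the original is unchanged
print(v1)  # [1, 0, 3, 4] - compartment 1 still exists but is empty
```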
And now let's create another compartment of 4 colored smarties.
We can now look at the three sets of four compartments of smarties which show in the variable explorer.
We can organise our compartments of smarties further by placing each compartment set of 4 into another list.
v0=[1,2,3,4]
v1=[1,0,3,4]
v2=[2,3,1,1]
m0=[v0,v1,v2]
We can view this in the variable explorer. We see that we have an outside list with indexes 0,1 and 2 but each value of this list is also a list (known as a nested list).
Returning to the analogy of colored smarties in containers, this is the equivalent of putting a set of the smarties containers in another container.
We must first access the outer container and then the inner container to get to the green smarties. For example:
Which retrieves the values of the original vector v0. We can then select the green smarties by further indexing.
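The two-step indexing into the nested list m0 can be sketched as:

```python
v0 = [1, 2, 3, 4]
v1 = [1, 0, 3, 4]
v2 = [2, 3, 1, 1]
m0 = [v0, v1, v2]
print(m0[0])     # [1, 2, 3, 4] - the outer container at index 0, i.e. v0
print(m0[0][1])  # 2 - then the green smarties within it
```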
We can also directly create a nested list in a single line. For example:
Here each inner list is enclosed in its own square brackets using commas as delimiters, and these are enclosed in an outer set of square brackets, once again using commas as delimiters.
We can then nest these 2 collections in another list.
v0=[1,2,3,4]
v1=[1,0,3,4]
v2=[2,3,1,1]
m0=[v0,v1,v2]
m1=[[1,2,1,1],[1,2,0,4],[2,2,1,1]]
b0=[m0,m1]
Take the time to look at how m1 is constructed (highlight the brackets).
Once again we can visualise this using our smarties example. To get into the inner container we must first access the two outer containers.
So to get the green smarties we must first access the outer container, then the middle container and then the inner container.
b0[0]
b0[0][1]
b0[0][1][1]
The book above can be created directly using:
b1=[[[1,2,3,4], [1,0,3,4], [2,3,1,1]], [[1,2,1,1], [1,2,0,4], [2,2,1,1]]]
Now let's look at the interaction between two scalars. Let's think of these as red smarties and assume we want to add them:
self=1
other=2
self+other
Now let's look at the interaction of a list of integer scalars with another list of integer scalars. We may try to add the list self with the list other.
However the + operator does not work in the same way as with the scalar case.
Instead the + operator performs a concatenation between the two lists and not numeric addition.
It is possible to perform the operation we want by using a loop however it is slightly cumbersome and not as elegant as the case where we added scalars together.
self=[1,2,3,4]
other=[1,0,3,4]
self+other
result=[]
for i in range(len(self)):
    result.append(self[i]+other[i])
print(result)
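A list comprehension with zip is a slightly tidier way to write the same loop, though it is still more awkward than the scalar + case:

```python
self = [1, 2, 3, 4]
other = [1, 0, 3, 4]
# pair up the elements and add each pair
result = [s + o for s, o in zip(self, other)]
print(result)  # [2, 2, 6, 8]
```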
limitations of lists for numerical data
As we have seen, viewing numerical data as multiple nested lists isn't the most intuitive.
Another complication with lists is that each index can store different data types, such as the numerical ints and floats shown above, but they can also contain strings, booleans and, as we have seen earlier, other nested lists. Moreover, each nested list can be a different size. This makes lists extremely versatile, which is an advantage in some applications, but this versatility is a drawback in others where it increases the chance of an error. The simple for loop above wouldn't have worked if one of the entries in the list had been input as a string, for example. In other applications such as plotting we want a list of numeric data, and the ability to introduce a string into a list increases the likelihood of introducing a value that the plotting functions won't recognise, also resulting in an error.
In the diagram of smarties sketched above, we can clearly see the color used to group each selection of smarties. However, if this dataset is represented by nested lists of different sizes, this grouping by color gets lost.
If the zeros are added so each list is the same size, then we know the 0th element of each nested list which depicts the cyan smarties is related to each other.
Default list methods are not designed for numerical operations such as addition and the + operator instead performs list concatenation.
The numpy library: numpy arrays
Importing the numpy library
Instead of using lists and nested lists for numeric data, we should use numpy numeric arrays. To use the numpy library we need to import it. As numpy is the most commonly used numeric library for Python, it is usually imported using the 2 letter abbreviation np.
import numpy as np
This shorthand abbreviation saves time when calling the multitude of functions within the library and cleans up the code, especially when multiple numpy functions are called on the same line. Note that although a one letter abbreviation such as n could technically be used, single letter abbreviations are generally avoided as single letters are routinely used as variable names and this could cause confusion, particularly for beginners. Once numpy is imported, we can type in np followed by a dot . and tab ↹ to view the wide range of numpy classes, functions and modules available to use. The ndarray class is at the top.
However it is far more common to use the function array to convert a scalar, numerical list or nested numerical list into an instance of the ndarray class.
If we highlight this function we can see we are given one positional input argument object, which is the object to be converted to a numpy array. We can create an integer 1 and assign it to the object name a (line 2). We can then create a numpy array using the function array called from the numpy module abbreviated as np (line 3). When calling this function we set the positional input argument object to a.
When using a function, positional input arguments have to be input at the beginning in the order listed, while keyword arguments can be assigned in any order (providing they are provided after the positional input arguments) and have a default value. If a keyword argument is not explicitly specified when using the function, its default value will be assigned.
import numpy as np
a=1
a1=np.array(a)
Or we could set it to 1 directly.
attributes and methods
Now that we have created the very basic ndarray a1, let's type it into the console followed by a dot . and then tab ↹ which will allow us to access a list of attributes and methods.
Attributes can be thought of as objects that are referenced from another object.
Methods can be thought of as functions that are referenced from a object.
Attributes can be thought of as objects which belong to an object, usually some properties of the object. For example, a complex number has a real and an imaginary component which are attributes of the number. A real number has a real component that is non-zero and an imaginary component that is zero. The attributes real and imag can be used to get these components. Attributes are called without parenthesis as they do not have any input arguments. For example, the attributes real and imag:
These yield separate arrays which correspond to the real and imaginary components of the array a1.
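As a sketch, assuming a1 = np.array(1) from above:

```python
import numpy as np

a1 = np.array(1)
print(a1.real)  # 1 - the real component
print(a1.imag)  # 0 - no imaginary component, so zero as expected
```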
If we type in the object, then a dot and the attribute name, followed by a dot . and tab ↹, we can access methods and attributes which belong to the attribute (the attribute itself, i.e. a1.real, being considered as an object in its own right).
For example from the object a1 we can use the attribute real to select the real component of the array a1 and assign it to the new object name b1. Then we can use another attribute imag to get the imaginary component from the new object b1 (there is no imaginary component so we get 0 as expected).
Alternatively in one line we can get the attribute of the attribute:
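The two forms described above can be sketched as (continuing with a1 = np.array(1)):

```python
import numpy as np

a1 = np.array(1)
b1 = a1.real  # the attribute real is itself an ndarray object
print(b1.imag)  # 0 - the imaginary component of the real component

# or equivalently, the attribute of the attribute in one line
print(a1.real.imag)  # 0
```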
The important thing to note above is that a1.real is itself an object in its own right and as an object it can have its own attributes and methods. Note that the attribute real cannot be called without reference to a1.
Methods are essentially functions that are looked up from an object. All objects of the same type will share these methods. Methods can be called from the object using the dot notation and like functions they must always be called with parenthesis.
For example the method max() will give the maximum value of the array (since it is a scalar with only one value of 1, it will return 1).
As attributes are also objects, they can also have methods:
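A minimal sketch of both points, again assuming a1 = np.array(1):

```python
import numpy as np

a1 = np.array(1)
print(a1.max())       # 1 - methods are called with parenthesis
print(a1.real.max())  # 1 - the attribute real has its own methods
```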
Instead of having a scalar object, we can look at using a list.
Let's create the list v_list and use the function np.array on v_list to create the numpy array v without changing any of the default values of the keyword arguments.
# %% Prerequisites
import numpy as np
# %% numpy arrays
v_list=[1,2,3,4]
v=np.array(v_list)
Notice the difference between v and v_list in the variable explorer. The numpy array is highlighted green and has a size (4,), and the list is highlighted yellow and has a size of 4. This is because a list will always show a single dimension; regardless of how many lists are nested, the variable explorer will only show the dimension of the outer list. For a numpy array, the size is shown for each dimension. The vector created has a dimension in the form of a tuple (4,) and as a consequence is neither a row vector (1 row, 4 columns) nor a column vector (4 rows, 1 column); rather, by default, it will exhibit whichever behaviour is most convenient. This can be seen in the variable explorer, where it is displayed as a row for convenience. If however it is expanded, it will instead display as a column by default. We will explore this concept in a moment.
Note that the object created, v, is an instance of the ndarray class. We can check this using the function:
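One way to make this check is the builtin isinstance (my choice of function here; type(v) shows the class directly):

```python
import numpy as np

v = np.array([1, 2, 3, 4])
print(isinstance(v, np.ndarray))  # True - v is an instance of ndarray
print(type(v))  # <class 'numpy.ndarray'>
```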
This means it inherits a large number of attributes (variables which belong to an object) and methods (functions which belong to an object). The list of attributes and methods can be accessed by typing in the instance name followed by a dot . and then a tab ↹. For example:
Recall that attributes are called without parenthesis. We can have a look at the attributes size, shape and dtype, which return the number of elements as a scalar, the dimensions of the array as a tuple and the datatype of the array respectively:
v.size
v.shape
v.dtype
In the above case all the elements in v_list were integers, so the dtype was automatically inferred from these values as int. We can explicitly change this to float using the keyword input argument dtype, setting it to float and overriding its default value None.
import numpy as np
v_list=[1,2,3,4]
v=np.array(v_list)
v2=np.array(v_list,dtype=float)
Note the subtle difference in the variable explorer when the dtype is float. We can check the attributes of v2 using the dot . notation.
v2.size
v2.shape
v2.dtype
Let's now use the function np.array to convert the vector lists we made earlier into numpy arrays:
import numpy as np
v0=np.array([1,2,3,4])
v1=np.array([1,0,3,4])
v2=np.array([2,3,1,1])
Now let's nest these into another list and convert it into a numpy array to get a matrix:
import numpy as np
m0=np.array([[1,2,3,4],[1,0,3,4],[2,3,1,1]])
We can once again look at the attributes. We can use the attribute size to get the number of elements:
The attribute shape returns a tuple with the 0th element corresponding to the number of rows and the 1st element corresponding to the number of columns respectively.
We can index into a matrix using square brackets. Like the tuple above, we specify the row as the 0th element and the column as the 1st element respectively. Note that we only use a single set of square brackets to index into a numpy array, unlike the nested list where we effectively indexed into the outside list and then indexed into the inner list. For example, to index into the 0th row and 1st column we would use:
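Using m0 from above, the single-bracket indexing looks like this:

```python
import numpy as np

m0 = np.array([[1, 2, 3, 4], [1, 0, 3, 4], [2, 3, 1, 1]])
# one set of square brackets: row index, then column index
print(m0[0, 1])  # 2 - row 0, column 1
```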
Multiple elements may be selected by use of a list. For example if we wish to select the elements in rows 1 and 2 within column 1, we would index using the vector [1,2] and 1:
A colon can also be used to select an entire row or column. For example if we want column 1 we would use the colon : to select all rows and 1 to select the column:
The colon can be used to index from a lower bound to an upper bound but not including the upper bound (zero order indexing).
Note that if a lower bound is not specified it is automatically taken to be 0, and if an upper bound is not specified it is taken to be the length of the dimension, in this case 3.
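The selection patterns above can be sketched with m0:

```python
import numpy as np

m0 = np.array([[1, 2, 3, 4], [1, 0, 3, 4], [2, 3, 1, 1]])
print(m0[[1, 2], 1])  # rows 1 and 2 of column 1 -> [0 3]
print(m0[:, 1])       # all rows of column 1 -> [2 0 3]
print(m0[0:2, 1])     # rows 0 and 1 (upper bound excluded) of column 1 -> [2 0]
print(m0[:2, 1])      # lower bound defaults to 0 -> [2 0]
```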
Let's create another matrix m1:
import numpy as np
m1=np.array([[1,2,1,1],[1,2,0,4],[2,2,1,1]])
Now we can combine both these matrices into a 3D array. We can create this directly using:
import numpy as np
b0=np.array([[[1,2,3,4],[1,0,3,4],[2,3,1,1]],
             [[1,2,1,1],[1,2,0,4],[2,2,1,1]]])
By default the data will be shown in matrix form as page 0.
However the shape attribute returns a tuple of length 3, (2,3,4), denoting 2 pages (the 0th element), 3 rows (the 1st element) and 4 columns (the 2nd element).
We can flick through the pages using the slider in the variable explorer:
By default we are sliding via the axis 0 which corresponds to the pages. However we can instead swap to axis 1 which will give us the page numbers as rows and we can scroll through the 3 rows.
Finally we can set the axis to 2. Here the rows show as the rows. The pages show as the columns and we slide through the 4 columns.
Once again we index using square brackets. The 0th element corresponds to the page, the 1st element to the row and the 2nd element to the column, consistent with the tuple returned from the attribute shape. Let's select the 1st page, 2nd row and 3rd column using:
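With b0 as defined above, this selection looks like:

```python
import numpy as np

b0 = np.array([[[1, 2, 3, 4], [1, 0, 3, 4], [2, 3, 1, 1]],
               [[1, 2, 1, 1], [1, 2, 0, 4], [2, 2, 1, 1]]])
print(b0.shape)     # (2, 3, 4) - 2 pages, 3 rows, 4 columns
print(b0[1, 2, 3])  # 1 - page 1, row 2, column 3
```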
the reshape function: explicitly setting a vector to a row or column
Let's create a new matrix:
import numpy as np
m=np.array([[1,2,3,4],
            [5,6,7,8],
            [9,10,11,12],
            [13,14,15,16]])
Now suppose we want to select a row, row 1, from this matrix. We index into m, selecting row 1 and all columns using a colon :.
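As a sketch (the name row1 is my own choice):

```python
import numpy as np

m = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])
row1 = m[1, :]     # row 1, all columns
print(row1)        # [5 6 7 8]
print(row1.shape)  # (4,) - a single dimension, neither row nor column
```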
We can see that the vector selected only has a single dimension; while it is shown as a row in the variable explorer, when expanded it instead displays as a column.
This dual behaviour is useful in many cases; however we may wish to explicitly specify whether a vector is a row or a column, and we can do this using the numpy function reshape, which takes an array as its first positional input argument followed by a tuple with the new dimensions to reshape the array by.
We want to reshape this into a row, i.e. a single row where every element is in a different column. We will set the 0th element of the tuple to 1 and the 1st element to the size of row1, which we can get using the attribute row1.size.
We can also set the 1st element in the tuple to the value -1 which means all the elements in the original array are assigned to this dimension i.e. every element is in a different column:
Note the differences between the original row1 and r1 in the variable explorer, including the tuple corresponding to the attribute shape and how they display when opened up. In the line above it is quite common to reassign the original variable name when explicitly setting it to a row. We can delete the variable r1 and perform an in place update of the variable name row1:
We can also select the 1st column using a similar procedure. This time all the elements in the original array will be in a different row so we use -1 for the 0th element in the tuple which corresponds to the number of rows:
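The reshape steps above can be sketched together (names r1, row1 and col1 follow the text):

```python
import numpy as np

m = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])
row1 = m[1, :]
r1 = np.reshape(row1, (1, row1.size))  # explicitly a row: 1 row, 4 columns
r1 = np.reshape(row1, (1, -1))         # -1: every element in a different column
row1 = np.reshape(row1, (1, -1))       # in place update of the original name
col1 = np.reshape(m[:, 1], (-1, 1))    # explicitly a column: 4 rows, 1 column
print(row1.shape)  # (1, 4)
print(col1.shape)  # (4, 1)
```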
Note the function reshape works by row order. We can demonstrate this by creating a new matrix m:
import numpy as np
m=np.array([[1,2,3],
            [5,6,7]])
Then we use the numpy function reshape, swapping the row and column values taken from its attribute shape.
Doing so again returns us back to the original matrix:
If we want to reshape by column order instead of row order, we should use the transpose of the matrix, which swaps the rows and columns with one another.
The transpose of the transpose returns one to the original values:
We can reshape m by column order by using the transpose with the function reshape:
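The reshape and transpose behaviour above can be sketched as follows (order='F' is numpy's direct keyword for column order, included here for comparison):

```python
import numpy as np

m = np.array([[1, 2, 3],
              [5, 6, 7]])
print(np.reshape(m, (3, 2)))  # row order: [[1 2] [3 5] [6 7]]
print(np.reshape(np.reshape(m, (3, 2)), (2, 3)))  # back to the original m
print(m.T)    # transpose swaps rows and columns: [[1 5] [2 6] [3 7]]
print(m.T.T)  # transpose of the transpose returns the original

# column order reshape via the transpose, and via the order keyword
print(np.reshape(m.T, (2, 3)).T)         # [[1 6] [5 3] [2 7]]
print(np.reshape(m, (3, 2), order='F'))  # the same result
```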
element by element operations
The special methods which map to operators, for example __add__ which maps to the operator +, also differ. For an ndarray the + operator performs addition instead of concatenation.
If we have a selection of smarties called self sorted by color and another selection called other and we want to add them, then to get the result we add the cyan smarties (0th element) of self and other together. Once we've done this we add the green smarties (1st element) of self and other together. Then we add the red smarties (2nd element) of self and other together. And finally we add the yellow smarties (3rd element) of self and other together. Note that each element in the array is treated separately and none of the colors gets mixed up. This is a visualisation of element by element addition.
In our case, our assignment is to the left hand side, so we can depict this element by element operation as:
import numpy as np
self=np.array([1,2,3,4])
other=np.array([1,3,2,5])
result=self+other
This can also be done using the += operator. In this case the self instance is updated to the new values.
import numpy as np
self=np.array([1,2,3,4])
other=np.array([1,3,2,5])
self+=other
As a consequence, the general rule is that the two arrays self and other must have the same number of elements and the same shape to perform element by element operations. There are some exceptions to this rule however: if an array and a scalar are added, scalar expansion (broadcasting) will be applied, and if an array and a vector are added, vector expansion will be applied.
import numpy as np
self=np.array([[1,2,3],[1,0,3]])
other=2
result=self+other
Vector expansion can also be used if the length of the vector matches the number of columns of the matrix:
import numpy as np
m=np.array([[1,2,3],[1,0,3]])
v1=np.array([1,2,3])
n=m+v1
Note that by default vector expansion will only apply if the number of columns of the matrix matches the length of the vector. Vector expansion will not occur if the vector matches the number of rows of the matrix unless the vector is explicitly expressed as a column vector; attempting it will result in a ValueError: operands could not be broadcast together.
import numpy as np
m=np.array([[1,2,3],[1,0,3]])
v2=np.array([1,2])
o=m+v2
Once the vector is explicitly expressed as a column vector, it can be added to the matrix and vector expansion will be automatically carried out:
import numpy as np
m=np.array([[1,2,3],[1,0,3]])
v2=np.array([1,2])
v2=np.reshape(v2,(v2.size,1))
o=m+v2
If planning to perform a calculation with a vector and a matrix, for example element by element operations such as addition, it is therefore recommended to express the vector explicitly as a row or a column. Many other operators also perform element by element operations:
|+||+=||element by element addition|
|-||-=||element by element subtraction|
|*||*=||element by element multiplication|
|/||/=||element by element float division|
|//||//=||element by element integer division|
|%||%=||element by element modulus|
|**||**=||element by element exponentiation|
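As a minimal sketch (the array values here are chosen purely for illustration), each of these operators acts element by element in the same way as addition:

```python
import numpy as np

a=np.array([9,8,6,4])
b=np.array([2,4,3,4])

print(a-b)   # element by element subtraction
print(a*b)   # element by element multiplication
print(a/b)   # element by element float division
print(a//b)  # element by element integer division
print(a%b)   # element by element modulus
print(a**b)  # element by element exponentiation
```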
With numpy arrays the + operation performs addition so to concatenate two numpy arrays together we must use the numpy function concatenate. Let's create two arrays:
import numpy as np
self=np.array([[1,2],
               [3,4]])
other=np.array([[5,6],
                [7,8]])
Now let's call the function concatenate from the numpy library using open parenthesis. Here we see that we have a positional input argument which is a tuple of the arrays to be concatenated and we have a keyword argument axis which has a default value of 0.
If we concatenate along axis=0 the default keyword argument value, then we expand each column using the concatenation. This means that each array must have the same number of columns to allow the concatenation. In this case all arrays being concatenated must have 2 columns.
Alternatively if we concatenate along axis=1, then we expand each row using the concatenation. This means that each array must have the same number of rows to allow the concatenation. In this case all arrays being concatenated must have 2 rows.
In our case as both matrices are square we can concatenate along columns (axis=0) or along rows (axis=1).
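To make this concrete, a minimal sketch concatenating the two square matrices above along each axis:

```python
import numpy as np

self=np.array([[1,2],
               [3,4]])
other=np.array([[5,6],
                [7,8]])

# axis=0: stack vertically, so the column counts must match
vert=np.concatenate((self,other),axis=0)
# axis=1: stack horizontally, so the row counts must match
horiz=np.concatenate((self,other),axis=1)
```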
Note a vector must be explicitly expressed as a row vector or as a column vector in order to perform a concatenation otherwise one will get a ValueError.
For example, we can concatenate the following vector as a row, or a column respectively.
import numpy as np
self=np.array([[1,2],
               [3,4]])
other=np.array([5,6])
row=np.reshape(other,(1,other.size))
result=np.concatenate((self,row))
import numpy as np
self=np.array([[1,2],
               [3,4]])
other=np.array([5,6])
col=np.reshape(other,(other.size,1))
result1=np.concatenate((self,col),axis=1)
indexing, reshaping and concatenation practice
(1) Import the numpy library and manually create the following numpy array m:
(2) From m create the following selections; yellow, red, magenta, green and cyan. Ensure you explicitly set yellow and cyan to column vectors and red and magenta to row vectors.
(3) Concatenate the colored selections to create the reconstructed matrix n.
Hint: create cyangreen, then redcyangreenmagenta and then redcyangreenmagentayellow.
# %% Prerequisites
import numpy as np
# %% Part 1 Create m
m=np.array([[1,2,3,4],
            [5,6,7,8],
            [9,10,11,12],
            [13,14,15,16]])
# %% Part 2 Create Selections
yellow=m[:,0]
yellow=np.reshape(yellow,(yellow.size,1))
red=m[0,1:]
red=np.reshape(red,(1,red.size))
magenta=m[1,1:]
magenta=np.reshape(magenta,(1,magenta.size))
green=m[2:,1:3]
cyan=m[2:,3]
cyan=np.reshape(cyan,(cyan.size,1))
# %% Concatenate to make n
cyangreen=np.concatenate((cyan,green),axis=1)
redcyangreenmagenta=np.concatenate((red,cyangreen,magenta),axis=0)
redcyangreenmagentayellow=np.concatenate((redcyangreenmagenta,yellow),axis=1)
n=redcyangreenmagentayellow
functions for rapid array generation
The numpy library has several functions for rapid array generation. The functions np.zeros() and np.ones() take a shape as their input argument(s) and generate arrays where each element is 0 and 1 respectively. We can take advantage of scalar expansion and np.ones() to quickly generate an array where each element is a different constant value.
For a vector only a single scalar input argument is required however for a matrix a tuple can be input with the 0th index denoting the number of rows (axis=0) and the 1st index denoting the number of columns (axis=1). Tuples with additional dimensions can be made to make higher dimensional arrays.
# %% Prerequisites
import numpy as np
# %% Array Generation
a=np.zeros(4)
b=np.zeros((2,3))
c=np.ones((2,3))
d=2*np.ones((2,3))
Note how the input scalar or input tuple corresponds to the size of the array shown on the variable explorer and when the numpy function shape is used.
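As a quick sketch of this correspondence, using the arrays defined above:

```python
import numpy as np

a=np.zeros(4)
b=np.zeros((2,3))

print(np.shape(a))  # (4,)  a vector with a single dimension
print(np.shape(b))  # (2, 3)  2 rows (axis=0) by 3 columns (axis=1)
```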
The function np.diag() can be used with an input vector to generate a square matrix where the diagonal is the vector and every other element is 0. Alternatively, if the input is a square matrix then it will read the diagonal. The anti-diagonal is less common but can be accessed using the functions np.fliplr() or np.flipud(), which flip a matrix horizontally (left right) and vertically (up down) respectively. The related function np.flip() is used for a vector which has a single dimension and hasn't been explicitly expressed as a column or a row.
# %% Prerequisites
import numpy as np
# %% Array Generation
# a=np.zeros(4)
# b=np.zeros((2,3))
# c=np.ones((2,3))
# d=2*np.ones((2,3))
e=np.diag([1,2,3,4])
f=np.array([[1,2,3,4],
            [5,6,7,8],
            [9,10,11,12],
            [13,14,15,16]])
g=np.fliplr(f)
h=np.flipud(f)
i=np.diag(f)
j=np.diag(g)
As we can see, e has a diagonal of [1,2,3,4] with every other element being assigned to 0.
We can compare f to g which is flipped left right horizontally:
We can compare f to h which is flipped up down vertically:
We can see that i is the diagonal of f:
And j is the antidiagonal of f:
The function arange can be used to quickly generate an array that increments by a step. This is useful if one wants to create unit spaced axes representing all the pixels on a screen for example. Let's call it with open parenthesis and have a look at its input arguments. Here we see the input arguments are different from previous functions.
This function can be called using either positional arguments or keyword arguments. For clarity let's first use this function with the three keyword arguments np.arange(start=0,stop=10,step=1). In this case we will start at start=0 and go up in steps of step=1 to but not including a stop value (zero-order indexing) stop=10.
import numpy as np
a=np.arange(start=0,stop=10,step=1)
We can also do this using three positional arguments np.arange(0,10,1) which correspond to the start, stop and step.
import numpy as np
a=np.arange(start=0,stop=10,step=1)
b=np.arange(0,10,1)
The function can also be supplied with a single positional argument np.arange(10). When only one positional input argument is assigned, it is taken as the stop. When only one positional argument is assigned the other arguments act as keyword arguments taking the default values of start=0 and step=1.
import numpy as np
a=np.arange(start=0,stop=10,step=1)
b=np.arange(0,10,1)
c=np.arange(10)
If the function is called with the three keyword input arguments; start, stop and step then these can be positioned in any order
import numpy as np
a=np.arange(start=0,stop=10,step=1)
b=np.arange(0,10,1)
c=np.arange(10)
d=np.arange(step=1,stop=10,start=0)
However to prevent confusion it is recommended to always use them in the order start, stop and then step.
Earlier we manually created the matrix m:
We can recreate it using the functions arange and reshape. Note we will have to start at 1 and end at 17 due to zero order indexing.
import numpy as np
m=np.arange(start=1,stop=17,step=1)
m=np.reshape(m,(4,4))
Let's now try and create an array that starts at -2, stops at 2 and has a step of 0.5.
import numpy as np
a=np.arange(start=-2,stop=2,step=0.5)
Once again because of zero order indexing, we start at the start value and go up in steps of step to but not including the stop value. In this case the stop value is 2 and the step is 0.5, so the last value in the array is 2-0.5=1.5.
A related function to arange is the function linspace. Once again we can call it with open parenthesis to look at the input arguments. Like arange we have the keyword input arguments, start and stop but this time we have the input argument num opposed to step. This function does not use zero order indexing and the last value in the array will be the stop value.
Let's compare arange and linspace:
import numpy as np
a=np.arange(start=-2,stop=2,step=0.5)
b=np.linspace(start=-2,stop=2,num=9)
The function linspace can also be used with 3 positional arguments. These have to be in the order start, stop and num:
import numpy as np
a=np.arange(start=-2,stop=2,step=0.5)
b=np.linspace(start=-2,stop=2,num=9)
c=np.linspace(-2,2,9)
sorting data in numpy arrays
There are several functions within the numpy library that act on scalars, vectors and matrices. The way these operate depends on their inputs and optional keyword input arguments. Let's look at the following vector:
We can sort out the elements in order:
We can also get the sorted indexes or sorted arguments of each element. In this case the lowest value is 4 and it is at index 0:
The 0th element of the sorted arguments is 0.
The next lowest value is 5 and is at index 2.
This means the 1st element of the sorted arguments is 2:
And finally the element of highest value is 6 at index 1:
This means that the sorted arguments or sorted indexes are:
We can calculate these using:
import numpy as np
v=np.array([4,6,5])
v_sorted=np.sort(v)
v_arg_sorted=np.argsort(v)
And indexing into v using the sorted indexes or arguments v_arg_sorted, gives v_sorted:
We can flip v_sorted and v_arg_sorted to get reverse sorted arrays v_rsorted and v_rargsorted.
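A minimal sketch of both points: indexing v with its sorted arguments reproduces the sorted array, and np.flip() gives the reverse sorted equivalents:

```python
import numpy as np

v=np.array([4,6,5])
v_sorted=np.sort(v)
v_arg_sorted=np.argsort(v)

# indexing with the sorted arguments gives the sorted array
check=v[v_arg_sorted]

# flipping gives the reverse sorted arrays
v_rsorted=np.flip(v_sorted)
v_rargsorted=np.flip(v_arg_sorted)
```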
Now let's type in the function using open parenthesis to see details about the positional input arguments and keyword input arguments:
We see we have the keyword input argument axis with a default value of -1, which means the last axis will be used by default. In the case of a vector this is the only axis, so using the default value is fine; it can also be explicitly set to its only axis, axis=0. The keyword argument kind=None indicates that the default sorting algorithm 'quicksort' is used, and the remaining keyword argument order is only used for more complicated arrays which have multiple data types.
Let's have a look at a matrix m
import numpy as np
m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])
this has two axes, axis 0 which selects the row number and axis 1 which selects the column.
Recall when we index, we select the row (the value along axis 0) and then the column (the value along axis 1):
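For example, with the matrix above (a minimal sketch):

```python
import numpy as np

m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])

# row index first (the value along axis 0), then column index (axis 1)
print(m[1,2])  # row 1, column 2 gives 4
```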
If we work along axis=0, then we sort data in each row of every column.
import numpy as np
m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])
m_sorted_axis0=np.sort(m,axis=0)
We can also sort along axis 1:
import numpy as np
m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])
m_sorted_axis1=np.sort(m,axis=1)
As the last axis is 1, the default value axis=-1 will refer to this axis.
import numpy as np
m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])
m_sorted=np.sort(m)
If we select axis=None this will convert the matrix into a vector, concatenating row by row, and then sort it:
import numpy as np
m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])
m_sorted=np.sort(m,axis=None)
So far we have looked at the function np.sort() called directly from the numpy library. Let's examine it using open parenthesis:
Here we see the positional and keyword input arguments. This can also be called as a method from numpy arrays.
Note when m.sort() is called as a method, the instance is already implied, in this case m so it doesn't show as a positional input argument.
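A minimal sketch contrasting the function (which returns a new sorted array) with the method (which sorts the instance in place):

```python
import numpy as np

m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])

m_sorted=np.sort(m)  # function: m itself is left unchanged
m.sort()             # method: m itself is now sorted along the last axis
```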
Associated with sorting is finding the minimum and maximum values in an array.
We can also look at the minimum and maximum value using the functions np.amin() and np.amax() respectively. These functions have the alias np.min() and np.max() respectively. We can verify this by calling the functions without parenthesis which will give details about the function. Note when we call np.max we get <function numpy.amax(…)> showing that it is an alias.
We can see that these share keyword arguments with the function np.sort() such as axis; note however that for np.amin() and np.amax() the default value is axis=None rather than the last axis.
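A quick sketch showing the alias and the shared axis keyword (array values chosen for illustration):

```python
import numpy as np

m=np.array([[3,6,2],
            [8,1,4],
            [5,9,7]])

# np.max behaves the same as np.amax (it is an alias)
print(np.amax(m),np.max(m))  # both give the overall maximum, 9
print(np.amax(m,axis=0))     # the maximum of each column
```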
We can also use np.argmin() and np.argmax() to find the index of the minimum and maximum argument respectively.
Let's create a simple vector and have a look at the minimum and maximum values. Let's first look at a vector:
The minimum value is 4 and the index of the minimum value is 0.
The maximum value is 9 and the index of the maximum value is 1. Let's create the vector and look at the minimum and maximum values including their indexes:
import numpy as np
v1=np.array([4,9,7])
v1_min=np.amin(v1)
v1_arg_min_idx=np.argmin(v1)
v1_max=np.amax(v1)
v1_arg_max_idx=np.argmax(v1)
If we want the second lowest number as opposed to the minimum we can use np.argsort(): the 1st element of the sorted arguments is the index of the second lowest value, which we can then use to index into the vector.
import numpy as np
v1=np.array([4,9,7])
v1_arg_sorted=np.argsort(v1)
idx=v1_arg_sorted[1]
second_lowest=v1[idx]
So far we have just looked at a single vector as an input argument however we can also compare two vectors by combining them together to make a matrix:
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
For the function np.amax() the default value is axis=None which will treat the matrix as a single vector, concatenating each row together and then find the maximum element.
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
m1max=np.amax(m1)
m1argmax=np.argmax(m1)
If we work along axis=0, then we find the max in each row of every column.
In this case the output will be a vector. We will use the np.reshape() function to explicitly express the output as a row vector:
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
m1max_axis0=np.amax(m1,axis=0)
m1argmax_axis0=np.argmax(m1,axis=0)
m1max_axis0=np.reshape(m1max_axis0,(1,m1max_axis0.size))
m1argmax_axis0=np.reshape(m1argmax_axis0,(1,m1argmax_axis0.size))
In our case this is what we wanted as we initially expressed the matrix as two vectors v1 and v2.
If we work along axis=1, then we find the max in each column of every row. This is also the last axis for a matrix, axis=-1.
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
m1max_axis1=np.amax(m1,axis=1)
m1argmax_axis1=np.argmax(m1,axis=1)
m1max_axis1=np.reshape(m1max_axis1,(m1max_axis1.size,1))
m1argmax_axis1=np.reshape(m1argmax_axis1,(m1argmax_axis1.size,1))
So far, we have seen sorting and finding the minimum and maximum value in a vector and a matrix, where the operation on the matrix used the additional keyword input argument axis. For a matrix: if we work along axis=0 we operate row by row; along axis=1 we operate column by column; the last axis axis=-1 corresponds to axis=1; and axis=None converts the matrix into a vector by concatenating each row and then acts on this vector. This theme is inherent to many other statistical functions. Let's look at the vector v1 and matrix m1:
By default the keyword argument is axis=None; for a vector the same results would show with axis=0:
For the matrix m1, by default the keyword argument is axis=None:
So the matrix will be concatenated row by row:
The sum of the vector will be calculated:
If we instead set axis=0, we will act on each row of a column:
This will return a row vector (we will need to explicitly designate it as a row):
If we set axis=1, we will act on each column of a row.
This will return a column vector (we will need to explicitly designate it as a column):
Because this is the last axis, axis=-1 will return the same result.
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
sum_v1=np.sum(v1)
sum_m1=np.sum(m1)
sum_m1_axis0=np.sum(m1,axis=0)
sum_m1_axis0=np.reshape(sum_m1_axis0,(1,sum_m1_axis0.size))
sum_m1_axis1=np.sum(m1,axis=1)
sum_m1_axis1=np.reshape(sum_m1_axis1,(sum_m1_axis1.size,1))
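And since axis=1 is the last axis of a matrix, axis=-1 gives the same sums (a minimal sketch):

```python
import numpy as np

v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])

print(np.sum(m1,axis=1))   # sum across each row
print(np.sum(m1,axis=-1))  # same result: the last axis of a matrix is axis=1
```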
The mean is just the sum value divided by the number of elements used to calculate the sum value.
It can be calculated from the data above using:
(v1_len,)=np.shape(v1)
mean_v1=sum_v1/v1_len
m1_size=np.size(m1)
mean_m1=sum_m1/m1_size
(m1_rows,m1_cols)=np.shape(m1)
mean_m1_axis0=sum_m1_axis0/m1_rows
mean_m1_axis1=sum_m1_axis1/m1_cols
Or more directly using the function np.mean().
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
mean_v1=np.mean(v1)
mean_m1=np.mean(m1)
mean_m1_axis0=np.mean(m1,axis=0)
mean_m1_axis0=np.reshape(mean_m1_axis0,(1,mean_m1_axis0.size))
mean_m1_axis1=np.mean(m1,axis=1)
mean_m1_axis1=np.reshape(mean_m1_axis1,(mean_m1_axis1.size,1))
A metric which expresses how much members of a group differ from the mean value is useful. By definition the sum of the differences between each value and the mean equals 0, so we instead take the difference of each value from the mean and square it. The sum of these squared differences divided by the number of samples n is known as the variance.
Here the denominator is n, which represents zero degrees of freedom. With zero degrees of freedom a single sample always gives a variance of 0, because the numerator is 0 and the denominator is 1. This is unrealistic as the result of a single sample could be a complete coincidence, so we normally adjust the number of degrees of freedom to 1, making the denominator n-1. In the case of a single sample the value is then 0 divided by 0, which is undefined (we cannot say if a measurement is accurate or consistent from a single result).
The variance cannot be directly related to the unit being measured as its dimensionality is in units squared. As a result, we normally take the square root of the variance, known as the standard deviation, which has the same dimensionality as the units.
We can use the numpy functions np.var() and np.std() to calculate these. Note both of these have the additional keyword argument ddof which assigns the degrees of freedom. Unfortunately their default value is 0 and not 1.
import numpy as np
v1=np.array([4,9,7])
v2=np.array([5,1,2])
m1=np.array([v1,v2])
std_v1=np.std(v1,ddof=1)
std_m1=np.std(m1,ddof=1)
std_m1_axis0=np.std(m1,axis=0,ddof=1)
std_m1_axis0=np.reshape(std_m1_axis0,(1,std_m1_axis0.size))
std_m1_axis1=np.std(m1,axis=1,ddof=1)
std_m1_axis1=np.reshape(std_m1_axis1,(std_m1_axis1.size,1))
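To check the definitions against the numpy functions, a small sketch computing the variance and standard deviation of v1 manually with one degree of freedom:

```python
import numpy as np

v1=np.array([4,9,7])

mean_v1=np.mean(v1)
# variance with 1 degree of freedom: sum of squared deviations / (n - 1)
var_manual=np.sum((v1-mean_v1)**2)/(v1.size-1)
std_manual=np.sqrt(var_manual)

# the same values via the numpy functions with ddof=1
var_np=np.var(v1,ddof=1)
std_np=np.std(v1,ddof=1)
```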
Supposing we have the following array:
import numpy as np
m=np.array([[1,3,4,7],
            [2,6,9,11],
            [5,8,100,12]])
We can see immediately that the value 100 is a clear outlier, in this case a deliberate typo of 10. We can calculate the mean with the typo and without the typo:
import numpy as np
m=np.array([[1,3,4,7],
            [2,6,9,11],
            [5,8,100,12]])
mean_m=np.mean(m)
mean_m_axis0=np.mean(m,axis=0)
mean_m_axis0=np.reshape(mean_m_axis0,(1,mean_m_axis0.size))
mean_m_axis1=np.mean(m,axis=1)
mean_m_axis1=np.reshape(mean_m_axis1,(mean_m_axis1.size,1))
import numpy as np
m=np.array([[1,3,4,7],
            [2,6,9,11],
            [5,8,10,12]])
mean_m=np.mean(m)
mean_m_axis0=np.mean(m,axis=0)
mean_m_axis0=np.reshape(mean_m_axis0,(1,mean_m_axis0.size))
mean_m_axis1=np.mean(m,axis=1)
mean_m_axis1=np.reshape(mean_m_axis1,(mean_m_axis1.size,1))
Here we see that the typo, the outlier, substantially skews the mean in this small sample. It is worthwhile comparing the median to the mean in small data sets as the median is less skewed by outliers. The median sorts the data and then returns the middle value (or the mean of the two middle values in the case of an evenly sized data set). We can compute the median with the typo and with it removed:
import numpy as np
m=np.array([[1,3,4,7],
            [2,6,9,11],
            [5,8,100,12]])
median_m=np.median(m)
median_m_axis0=np.median(m,axis=0)
median_m_axis0=np.reshape(median_m_axis0,(1,median_m_axis0.size))
median_m_axis1=np.median(m,axis=1)
median_m_axis1=np.reshape(median_m_axis1,(median_m_axis1.size,1))
import numpy as np
m=np.array([[1,3,4,7],
            [2,6,9,11],
            [5,8,10,12]])
median_m=np.median(m)
median_m_axis0=np.median(m,axis=0)
median_m_axis0=np.reshape(median_m_axis0,(1,median_m_axis0.size))
median_m_axis1=np.median(m,axis=1)
median_m_axis1=np.reshape(median_m_axis1,(median_m_axis1.size,1))
We can see that the median with the outlier is a more accurate representation of the mean without the outlier than the mean with the outlier.
We saw earlier the function np.sum(); we can also calculate the cumulative sum using the function np.cumsum(). Like np.sum() the keyword argument axis=None by default, and in the case of a vector it doesn't matter whether it is set to the only axis axis=0 or the last axis axis=-1:
This will create the cumulative sum across a vector:
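A minimal sketch on a vector (values chosen for illustration):

```python
import numpy as np

v=np.array([1,2,3,4])
print(np.cumsum(v))  # running total: [1, 3, 6, 10]
```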
When we look at matrix, like the function np.sum() we know that axis=None concatenates the matrix row by row before performing the operation