This website is maintained by an individual and technology enthusiast, Philip Yip. Although I have been recognised as a Dell Community Rockstar and Microsoft MVP, I am affiliated with neither company.
The Numeric Python Library
This guide will look at the use of numpy, an abbreviation for Numeric Python. This library is built around the ndarray data structure, which can be thought of as a numeric array such as a vector, a matrix, or a higher order array (a book of matrices, for example). For simplicity we will mainly focus on matrices.
In this guide I will use the Anaconda 2021.11 Python Distribution with the Spyder 5 IDE to look at the core concepts behind this Python library. If you haven't already made yourself comfortable with the Spyder 5 IDE and the Python programming language, please check out my other guides before starting, to get the most out of this one:
Inbuilt Python data structures
int or float
In Python we have two fundamental numeric datatypes, the int and the float, which are whole numbers and decimal numbers respectively. In real life these usually represent a scalar: a physical quantity that is completely described by its magnitude. For example, the scalar for length might be defined as the metre, and every number we use for distance is then referenced with respect to the scale of 1 metre. We can create the variables num1 as 1 metre and num2 as 2 metres:
num1 = 1
num2 = 2
When we run this script we can see the values display on the variable explorer:
If we type in one of the object names for example num1 followed by a dot . we see a list which includes a number of numeric functions that can be called from the object as well as properties that can be read-off the object.
num1.
These functions and properties are defined in the object's class. The class can be conceptualised as a blueprint which effectively gives instructions on storing the data within the data structure and defines methods (functions) for interacting with the data and attributes (properties) which can be read off the data.
In real life, multiple objects are usually created from a common class known as a blueprint. The blueprint itself is a conceptual idea; real objects are constructed from the blueprint. If the blueprint for example is for a house, multiple houses may be created that are all initially identical except for the postcode, which is assigned during the house's construction (or initialisation). Each house constructed is a physical object known as an instance of the house class. Every house constructed will have attributes such as house height and the number of rooms in the house. Each house will also have some form of functionality, for example the front door can be opened or closed and the temperature on the central heating system can be changed. These are known as methods for interacting with the house. The way the door and central heating system work are defined in the blueprint of the house.
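The analogy above can be sketched as a minimal Python class (the postcode values and attribute names here are made up for illustration):

```python
class House:
    """Blueprint: instructions for constructing house instances."""

    def __init__(self, postcode):
        # attributes (properties that can be read off the instance)
        self.postcode = postcode
        self.height = 10
        self.n_rooms = 4
        self.door_open = False

    # methods (functions for interacting with the instance)
    def open_door(self):
        self.door_open = True

    def close_door(self):
        self.door_open = False


house1 = House("G1 1AA")  # an instance of the House class
house2 = House("G2 2BB")  # another instance, differing only by postcode
house1.open_door()        # interact with one physical house
```

Each instance carries its own attribute values, while the behaviour of the methods is shared via the blueprint.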
num1 and num2, as you can see in the variable explorer, are both instances of the int class. If we type in the class name int followed by a dot . we get the same list of numeric functions and objects:
int.
If we scroll down we can see that many of these functions begin and end with a double underscore and therefore are colloquially known as double underscore (dunder) methods or more formally datamodel methods:
Let's have a look at the __add__ datamodel method.
If we call __add__ as a method from the int class, we need to first select an instance self, because the class itself is only a conceptual idea. Going back to the analogy of a house and using the method to open the door: we are looking at the blueprint and now need to physically select a house (an instance of the house class) so we have a physical door to open.
In the case of add we also need a second instance x in order to add self and x:
If we call __add__ from an instance in this case num1, then the instance self is already implied.
Going back to the analogy of the house, we are in a house and we are going to open its door.
Now we only need to specify the other instance x in order to add self and x:
As we can see the following are all equivalent:
int.__add__(num1, num2)
num1.__add__(num2)
num1 + num2
The datamodel method __add__ defined in the class int defines the behaviour of the + operator.
If we use the dir function to look at the directory of num1 in the console we can scroll through this list here.
The following datamodel methods map to the operators:
method | operator |
__add__ | + |
__sub__ | - |
__mul__ | * |
__floordiv__ | // |
__mod__ | % |
__truediv__ | / |
__eq__ | == |
__ge__ | >= |
__gt__ | > |
__le__ | <= |
__lt__ | < |
__ne__ | != |
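As a quick sanity check, each operator in the table simply calls the corresponding datamodel method, so the following pairs give identical results:

```python
a = 7
b = 3
# each operator calls the dunder method shown beside it
print(a.__add__(b), a + b)        # 10 10
print(a.__sub__(b), a - b)        # 4 4
print(a.__floordiv__(b), a // b)  # 2 2
print(a.__mod__(b), a % b)        # 1 1
print(a.__le__(b), a <= b)        # False False
```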
If we now create:
num3 = 1.2
We see in the variable explorer this is a float:
The datamodel methods for an object from the float class are different to an object from the int class as they are both built from different blueprints. However both are numeric so many of the functions have a similar definition:
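For example, a float instance has methods an int does not have, such as is_integer, while the arithmetic dunder methods behave analogously:

```python
num3 = 1.2
print(type(num3))          # <class 'float'>
print(num3.is_integer())   # False - 1.2 has a fractional part
print((3.0).is_integer())  # True - 3.0 is a whole number
print(num3.__add__(1))     # 2.2 - float __add__ works like int __add__
```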
string
The above numeric data structures consisted of only a single number. In Python we also have the string datatype, which can consist of multiple characters. Let's create two simple str instances:
text1 = "hello"
text2 = "world"
Notice how this is a completely different blueprint and all the methods that can be called from it are to do with text manipulation:
There are some datamodel methods:
However these are set up for text manipulation. If we use the __add__ datamodel method for example:
str.__add__(text1, text2)
text1.__add__(text2)
text1 + text2
We see concatenation takes place i.e. an output str with the contents of text1 and then the contents of text2 (with no space character added).
list and tuple
We can use a mutable list or immutable tuple to store a collection of numeric values. Each individual value is a scalar however as part of a collection, each value also has a direction (assigned by the index of the collection).
nums1 = [1, 2, 3, 4, 5]
nums2 = [2, 4, 6, 8, 10]
The list of functions and objects that can be called from a list object are geared to modifying the contents of the list, for example extend, insert and pop:
The datamodel methods are listed such as __add__
The list blueprint has similar instructions to the str blueprint for this method and concatenation is carried out, with the output being a new list containing the contents of nums1 followed by the contents of nums2:
It does not perform numeric addition like we saw in the int class; to do so we need to set up a for loop:
nums1 = [1, 2, 3, 4, 5]
nums2 = [2, 4, 6, 8, 10]
nums3 = []
for idx, val in enumerate(nums1):
    nums3.append(nums1[idx] + nums2[idx])
One of the reasons behind this behaviour is that a list is flexible and can contain different datatypes; while advantageous in many cases this can also be problematic. For example if one of the list elements is accidentally converted into a str, the for loop fails because the str class and the int class, as we have just seen, have different definitions in their blueprints for the __add__ datamodel method, and thus when we use the + operator, Python doesn't know what we want to do and gives us this error:
nums1 = [1, 2, 3, 4, "5"]
nums2 = [2, 4, 6, 8, 10]
nums3 = []
for idx, val in enumerate(nums1):
    print(nums3)
    nums3.append(nums1[idx] + nums2[idx])
We can create lists of lists to represent higher-dimensional numeric data. In this case the outer list has 2 elements and each inner list has 5 elements. This gives a vague semblance of a matrix; however if we wanted to add two equally sized matrices together in this form the code would be a lot more complicated, involving nested for loops:
nums3 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10]]
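To illustrate the point, here is a sketch of the nested for loops needed to add two such matrices element by element (the second matrix is made up for the example):

```python
m1 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10]]
m2 = [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]  # hypothetical second matrix
m3 = []
for ridx in range(len(m1)):            # outer loop over rows
    new_row = []
    for cidx in range(len(m1[ridx])):  # inner loop over columns
        new_row.append(m1[ridx][cidx] + m2[ridx][cidx])
    m3.append(new_row)
print(m3)  # [[2, 3, 4, 5, 6], [3, 5, 7, 9, 11]]
```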
The tuple has an almost identical form to a list except uses ( ) opposed to [ ] and is immutable:
nums1 = [1, 2, 3, 4, 5]
nums4 = (1, 2, 3, 4, 5)
This can be seen by looking at the methods that can be called from a tuple. They are a subset of the ones available from a list and are only the ones for reading properties, not editing or mutating the data structure:
Both lists and tuples can be indexed using square brackets:
nums1[0]
nums4[0]
In the case of nums3, which is a list of lists, we must first index an element in the outer list and then index an element in the inner list. Because we are indexing using two separate stages, each index must be enclosed in its own square brackets:
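For example, using nums3 from above:

```python
nums3 = [[1, 2, 3, 4, 5], [2, 4, 6, 8, 10]]
inner = nums3[0]    # first set of brackets selects an inner list
print(inner)        # [1, 2, 3, 4, 5]
print(nums3[0][1])  # second set selects an element within it: 2
print(nums3[1][4])  # last element of the second inner list: 10
```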
dict
A dict is another commonly used Python collection. Instead of a numeric index, it normally uses an index of strings. Each unique string in the index is known as a key and each key is assigned a value. This makes the structure analogous to an English dictionary and is where it gets its name. The value can be an int, float, string or another collection.
nums5 = {"nums1": [1, 2, 3, 4, 5], "nums2": [2, 4, 6, 8, 10]}
If we examine the methods coming from a dict, they are similar to those in a list as both collections are mutable but also differ slightly as they are different data structures defined by separate blueprints and the dict has key: value pairs opposed to just values in the case of a list:
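A short sketch of looking up values in the dict above by key:

```python
nums5 = {"nums1": [1, 2, 3, 4, 5], "nums2": [2, 4, 6, 8, 10]}
print(nums5["nums1"])      # the value assigned to the key "nums1"
print(list(nums5.keys()))  # ['nums1', 'nums2']
print(nums5["nums2"][0])   # index into the looked-up value: 2
```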
The numpy array
importing numpy
To use the numpy library, we need to import it using:
import numpy as np
A 2 letter abbreviation is typically used as this library is referenced many times within a script.
Recall that this references the numpy subfolder found within site-packages:
%UserProfile%\Anaconda3\lib\site-packages\numpy\__init__.py
Specifically the __init__.py file within this folder.
Because this is not a standard Python library but a third-party Data Science library, we need to import the library into the kernel in order to view code-completion for this library:
We can now type:
np.
to view the list of objects and functions we can call from the numpy library (imported with alias np):
This is a large list; you can scroll past most of the capitalised classes and variables and focus on the lower case variables and functions:
constants
The numpy library has 2 mathematical constants, e and pi. There is also a representation for not a number, NaN, and a representation for infinity, inf:
import numpy as np
num1 = np.e
num2 = np.pi
num3 = np.NaN
num4 = np.inf
creating arrays
In numpy we have the ndarray class. The class can be used to directly create a numpy array by supplying the shape of the array as a tuple alongside the datatype of the array. In this case let's create one with 2 rows and 3 columns and use the int datatype:
import numpy as np
nums1 = np.ndarray((2, 3), int)
The array created has garbage values which need to be changed to something desired by the user:
Notice that the ndarray has a datatype, expressed in the variable explorer, that is consistent across all the elements in the array. Because the ndarray is a single collection it can be indexed using a single set of square brackets containing the row and column to be indexed respectively. This is different to the case where we had nested lists and had to index into the outer list before we could index into the inner list:
nums1[0, 0] = 1
nums1[0, 1] = 2
nums1[0, 2] = 3
nums1[1, 0] = 4
nums1[1, 1] = 5
nums1[1, 2] = 6
Each element can be assigned a numeric value. Notice that because the ndarray is set up to contain only numeric data, it can interact with a number; in this case the number 2 can be added to every element in the array and stored to an output variable which is in turn an ndarray:
nums2 = nums1 + 2
nums1 is an instance of the ndarray class and has a number of functions (methods) and properties (attributes):
Which are defined in the ndarray class:
Note that the numpy library has a large number of functions which have a similar name to these methods. Sometimes they behave almost identically; however sometimes the methods in the ndarray class directly mutate the instance of the ndarray, in a similar way to how list methods mutate a list in place. For an ndarray, the method sort mutates the ndarray and has no return statement, but the method reshape does not mutate the ndarray and instead has a return statement. This inconsistency can be a bit confusing and therefore it is usually recommended to work with the functions from the numpy library directly, as most of them don't mutate the array and have a return statement:
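A short sketch of the difference, using a simple 1d array:

```python
import numpy as np

nums = np.array([3, 1, 2])

sorted_copy = np.sort(nums)  # numpy function: returns a new sorted array
print(sorted_copy)           # [1 2 3]
print(nums)                  # [3 1 2] - the original is unchanged

result = nums.sort()         # ndarray method: mutates in place
print(result)                # None - no return value
print(nums)                  # [1 2 3]
```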
In numpy it is also far more common to use the array function to create a new instance of the ndarray than to use the ndarray class directly as we did above. The important positional input argument is object, which is generally a list for a 1d array or a list of equally sized lists for a 2d array. The dtype is usually automatically determined by the most complex datatype in the object; however it can be manually assigned:
Let's create a simple array:
import numpy as np
nums1 = np.array([1, 2, 3, 4])
The datatype in this case is int32, determined as all elements in the array were int. The shape is returned as a single element tuple (4,). Note that the tuple is not (4, 1) or (1, 4); the array therefore shows as a row under value in the variable explorer but as a column when it is opened in a separate window.
Let's update the 0th index to be 1.2 instead of 1. Now the datatype is determined to be float64.
import numpy as np
nums1 = np.array([1.2, 2, 3, 4])
We can force these to be a different datatype by specifying the dtype as int for example:
import numpy as np
nums1 = np.array([1.2, 2, 3, 4], dtype=int)
This truncates the value 1.2 to 1 in the 0th index:
Let's now look at creating a 2d array or matrix using a list of equally sized lists as an object:
import numpy as np
nums1 = np.array([[1, 2, 3, 4, 5], [2, 4, 6, 8, 10]])
Notice that the shape is (2, 5) which is denoted by the tuple in the variable explorer and corresponds to 2 rows and 5 columns respectively. The object we used to make the 2d array was an outer list with 2 inner lists of size 5.
A practical application for 2d arrays is a black and white image. A black and white image consists of pixels. Each pixel is a tiny square that represents an int from 0-255, with 0 being the absence of light and 255 being a bright pixel.
We can make a higher dimensional array by making a list of equally sized lists (of equally sized lists). This becomes a bit cumbersome; however Spyder's syntax highlighting will help you identify matching brackets:
import numpy as np
nums1 = np.array([[[1, 1, 1, 1, 1], [2, 2, 2, 2, 2]],
                  [[3, 3, 3, 3, 3], [4, 4, 4, 4, 4]],
                  [[5, 5, 5, 5, 5], [6, 6, 6, 6, 6]]])
Now the shape is (3, 2, 5) indicating 3 pages, 2 rows and 5 columns. Notice that the object used to create the numpy array was an outer list with 3 elements. Each element in that list was a list of 2 elements and each element in those lists had 5 elements. The highest dimension of the array is expressed on the left hand side:
In the variable explorer each page displays as a separate matrix and the index can be scrolled through:
nums1[0, :, :]
nums1[1, :, :]
nums1[2, :, :]
Above you can see slicing in square brackets. We index into a numpy array using a similar procedure to indexing a list; however we only need one set of square brackets because we have a data structure that is a single array and not a list of other lists.
For example, selecting the 3rd (index 2 as we count from 0) page, 2nd (index 1 as we count from 0) row and all elements (:) will give:
nums1[2, 1, :]
A practical application for 3d arrays is a colour image. A colour image consists of pixels that each have three channels, a red channel, a green channel and a blue channel. The mixture of int values in each channel is registered as a colour by our brain which performs colour-mixing using ratios from the three different colour sensitive cones in the eye:
We have a number of functions which can be used to quickly generate an array such as zeros and ones which each take in a tuple of dimensions to create an array of zeros and ones respectively. In the case of ones, a scalar can multiply the array to obtain an array of any desired constant value. Both zeros and ones default to a float64 datatype but the datatype can be specified as an optional secondary positional input argument if desired:
import numpy as np
nums1 = np.zeros((4,))
nums2 = np.ones((4,))
nums3 = 3 * np.ones((4,))
nums4 = np.zeros((3, 2, 5), int)
The zeros function is often used to initialise higher order dimensional arrays. The book created before can be initialised to a zeros array and three pages can be separately created as matrices. We can index into each page of the book and update it to the values in the respective matrix:
import numpy as np
book = np.zeros((3, 2, 5), int)
page0 = np.array([[1, 1, 1, 1, 1], [2, 2, 2, 2, 2]])
page1 = np.array([[3, 3, 3, 3, 3], [4, 4, 4, 4, 4]])
page2 = np.array([[5, 5, 5, 5, 5], [6, 6, 6, 6, 6]])
book[0, :, :] = page0
book[1, :, :] = page1
book[2, :, :] = page2
del page0
del page1
del page2
For a 4d array we can imagine a single shelf of equally sized books, in this case 8 equally sized volumes. Let's write code to fill 2 of these 8 volumes:
import numpy as np
shelf = np.zeros((8, 3, 2, 5), int)
book0 = np.array([[[1, 1, 1, 1, 1], [2, 2, 2, 2, 2]],
                  [[3, 3, 3, 3, 3], [4, 4, 4, 4, 4]],
                  [[5, 5, 5, 5, 5], [6, 6, 6, 6, 6]]])
book1 = np.array([[[7, 7, 7, 7, 7], [8, 8, 8, 8, 8]],
                  [[9, 9, 9, 9, 9], [10, 10, 10, 10, 10]],
                  [[11, 11, 11, 11, 11], [12, 12, 12, 12, 12]]])
shelf[0] = book0
shelf[1] = book1
The variable explorer doesn't support viewing such a complicated array:
However a practical application of this array type is a colour video, which essentially consists of a series of frames where each frame is a 3d array as we saw before.
A higher dimensional array can be imagined as bookshelves where each shelf has an equal number of books and each book has the same number of pages and each page has the same number of rows and columns…
A higher dimensional array can be imagined as a library of bookshelves…
A higher dimensional array can be imagined as several of these libraries…
I will not delve further into such complexity in this guide.
A number of functions are available to create equally spaced arrays. The arange, or in full the array range function, is very similar to the range object in Python except it returns a numpy array as opposed to a range object.
Like the range function it uses zero-order based indexing and is exclusive of the top bound. All the following lines of code are equivalent:
import numpy as np
nums1 = np.arange(start=0, stop=10, step=1)
nums2 = np.arange(0, 10, 1)
nums3 = np.arange(start=0, stop=10)
nums4 = np.arange(0, 10)
nums5 = np.arange(stop=10)
nums6 = np.arange(10)
The datatype will be dependent on the values input: if start, stop and step are all int the datatype will be int, otherwise the datatype will be float:
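A quick sketch of this dtype behaviour:

```python
import numpy as np

ints = np.arange(0, 10, 1)      # all int arguments -> integer array
floats = np.arange(0, 1, 0.25)  # a float step -> float array
print(ints.dtype.kind)          # 'i'
print(floats.dtype)             # float64
print(floats)                   # [0.   0.25 0.5  0.75]
```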
linspace produces equally spaced numbers. Note the datatype is always going to be an array of floats as it uses true division to calculate the values. A comparison of arange (zero-order indexing and exclusive of the top bound) and linspace (inclusive of the bottom and top bound):
import numpy as np
nums1 = np.arange(start=-5, stop=5, step=1)
nums2 = np.linspace(start=-5, stop=5, num=11)
There is also a logspace function which gives logspaced datapoints for example:
import numpy as np
nums1 = np.logspace(start=1, stop=4, num=4)
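The exponents run from start to stop inclusive, and the optional base keyword (default 10) changes the base:

```python
import numpy as np

nums1 = np.logspace(start=1, stop=4, num=4)          # [10. 100. 1000. 10000.]
nums2 = np.logspace(start=0, stop=3, num=4, base=2)  # [1. 2. 4. 8.]
```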
axis, shape and reshape
Inbuilt Python objects only have a size and do not have a shape. The variable explorer in Spyder states size but in the case of numpy arrays will display the shape instead as it is more useful. There are shape and size functions that we can use to get the dimensions of an array, which can be useful when constructing something like a for loop:
import numpy as np
matrix = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
msize = np.size(matrix)
mshape = np.shape(matrix)
nrows = np.shape(matrix)[0]
ncols = np.shape(matrix)[1]
If we have a 3d array, we would need to update the indexes used for nrows and ncols to account for the new higher order dimension added to the left hand side of the shape tuple:
import numpy as np
book = np.array([[[1, 1, 1, 1, 1], [2, 2, 2, 2, 2]],
                 [[3, 3, 3, 3, 3], [4, 4, 4, 4, 4]],
                 [[5, 5, 5, 5, 5], [6, 6, 6, 6, 6]]])
msize = np.size(book)
mshape = np.shape(book)
npages = np.shape(book)[0]
nrows = np.shape(book)[1]
ncols = np.shape(book)[2]
As a result the axis corresponding to columns for example changes each time the array grows in dimensionality. It is therefore more convenient to use the negative index of each axis, which for the case of columns is always -1 and for the case of rows is always -2 and so on:
(nrows, ncols) | (axis=0, axis=1) | (axis=-2, axis=-1) |
(npages, nrows, ncols) | (axis=0, axis=1, axis=2) | (axis=-3, axis=-2, axis=-1) |
(nbook, npages, nrows, ncols) | (axis=0, axis=1, axis=2, axis=3) | (axis=-4, axis=-3, axis=-2, axis=-1) |
import numpy as np
matrix = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
msize = np.size(matrix)
mshape = np.shape(matrix)
nrows = np.shape(matrix)[-2]
ncols = np.shape(matrix)[-1]
import numpy as np
book = np.array([[[1, 1, 1, 1, 1], [2, 2, 2, 2, 2]],
                 [[3, 3, 3, 3, 3], [4, 4, 4, 4, 4]],
                 [[5, 5, 5, 5, 5], [6, 6, 6, 6, 6]]])
msize = np.size(book)
mshape = np.shape(book)
npages = np.shape(book)[-3]
nrows = np.shape(book)[-2]
ncols = np.shape(book)[-1]
Let's create a numpy array with 16 elements using the arange function. We can reshape this to a 4 by 4 matrix, a 2 by 8 matrix or an 8 by 2 matrix. We can also flatten out one of these matrices to return to a single axis array. To select all elements in the shape tuple we can use -1:
import numpy as np
nums1 = np.arange(16)
m44 = np.reshape(nums1, (4, 4))
m28 = np.reshape(nums1, (2, 8))
m82 = np.reshape(nums1, (8, 2))
flat = np.reshape(m44, (-1,))
Notice that the reshaping has populated the values along the -1st axis of the array (along the columns) and then along the -2nd axis of the array (along the rows). We can also reshape to a 3D array:
import numpy as np
nums1 = np.arange(16)
b224 = np.reshape(nums1, (2, 2, 4))
Notice once again that the reshaping has populated the values along the -1st axis of the array (along the columns) and then along the -2nd axis of the array (along the rows) and then the -3rd axis of the array (along the pages).
We can transpose a matrix which essentially reverses the shape tuple. In the case of a matrix the number of rows and columns are switched:
import numpy as np
nums1 = np.arange(16)
m44 = np.reshape(nums1, (4, 4))
m44t = np.transpose(m44)
Transposing can also be used on an array with higher dimensions; however it is more difficult to conceptualise:
import numpy as np
nums1 = np.arange(16)
b224 = np.reshape(nums1, (2, 2, 4))
b224t = np.transpose(b224)
We can also use fliplr and flipud to flip a matrix left-right and up-down respectively:
import numpy as np
nums1 = np.arange(16)
m44 = np.reshape(nums1, (4, 4))
m44lr = np.fliplr(m44)
m44ud = np.flipud(m44)
There is also the function flip, which requires the additional keyword input argument axis. As mentioned earlier, it is recommended to think of axis with respect to negative indices. Therefore to get behaviour equivalent to fliplr and act upon columns we should select axis=-1, and to get behaviour equivalent to flipud we should use axis=-2 to work on rows:
import numpy as np
nums1 = np.arange(16)
m44 = np.reshape(nums1, (4, 4))
m44lr = np.flip(m44, axis=-1)
m44ud = np.flip(m44, axis=-2)
A 1d array, for example vec below, has a shape tuple of (5,) and is therefore classified as neither a row nor a column.
import numpy as np
vec = np.array([1, 2, 3, 4, 5])
In some operations, for example concatenation or matrix multiplication, we will need it to explicitly be one or the other. To get this behaviour we can use reshape. In the case of a row, we want 1 row and every item in the array to be placed in a different column; supplying -1 instructs reshape to infer this dimension from the size of the array. We can therefore use the tuple (1, -1), and likewise for a column we use the tuple (-1, 1):
import numpy as np
vec = np.array([1, 2, 3, 4, 5])
cvec = np.reshape(vec, (-1, 1))
rvec = np.reshape(vec, (1, -1))
A vector can also be converted to a column vector or row vector via the use of np.newaxis. Essentially we index into the existing 1D array using all elements and we use np.newaxis to insert a new axis. This axis should be inserted at the end i.e. the -1 index for a column vector and at -2 index for a row vector:
import numpy as np
vec = np.array([1, 2, 3, 4, 5])
cvec = vec[:, np.newaxis]
rvec = vec[np.newaxis, :]
The datamodel methods in numpy arrays are set up for numeric operations, and as we saw earlier the + operator performs addition and not concatenation. To concatenate we instead use the concatenate function, where we supply all the arrays we want to concatenate as a tuple and then select the axis to concatenate along.
Let's create a square matrix and concatenate it with a row vector and a column vector respectively:
import numpy as np
matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
vec = np.array([11, 12, 13])
rvec = vec[np.newaxis, :]
rmatrix = np.concatenate((rvec, matrix), axis=-2)
import numpy as np
matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
vec = np.array([11, 12, 13])
cvec = vec[:, np.newaxis]
cmatrix = np.concatenate((cvec, matrix), axis=-1)
And if we want a flattened array:
import numpy as np
matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
vec = np.array([11, 12, 13])
flat = np.concatenate((vec, matrix), axis=None)
element by element operations
We can carry out most of the numeric operations used for a single int or float element by element on an ndarray: addition +, subtraction -, multiplication *, floor division //, modulo %, exponentiation ** and true division /, for example:
import numpy as np
m1 = np.array([[1, 2, 3], [4, 5, 6]])
m2 = np.array([[3, 2, 1], [6, 5, 4]])
m3 = m1 + m2
m4 = m1 - m2
m5 = m1 * m2
We can also interact a matrix with a scalar:
import numpy as np
m1 = np.array([[1, 2, 3], [4, 5, 6]])
m3 = m1 + 2
m4 = m1 - 2
m5 = m1 * 2
and in such a case, the scalar is expanded to match the dimensions of the matrix:
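This expansion is numpy broadcasting; the broadcast_to function makes the expanded scalar explicit:

```python
import numpy as np

m1 = np.array([[1, 2, 3], [4, 5, 6]])
expanded = np.broadcast_to(2, m1.shape)  # the scalar 2 viewed as a (2, 3) array
print(expanded)
# [[2 2 2]
#  [2 2 2]]
print(np.array_equal(m1 + 2, m1 + expanded))  # True
```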
A row-vector can likewise be expanded across each row:
import numpy as np
m1 = np.array([[1, 2, 3], [4, 5, 6]])
rvec = np.array([1, 2, 3])[np.newaxis, :]
m3 = m1 + rvec
m4 = m1 - rvec
m5 = m1 * rvec
A column-vector can be expanded across each column:
import numpy as np
m1 = np.array([[1, 2, 3], [4, 5, 6]])
cvec = np.array([1, 2])[:, np.newaxis]
m3 = m1 + cvec
m4 = m1 - cvec
m5 = m1 * cvec
There are a number of other operations that operate element by element, such as the abs function and the negative function:
import numpy as np
m1 = np.array([[-1, 2, -3], [4, -5, 6]])
m2 = np.abs(m1)
m3 = np.negative(m1)
The round function can be used to round float numbers to the nearest integer or to an optional desired number of decimal places, and the ceil function can be used to round up to the next highest int. Creating a new array and assigning the datatype to int will truncate the decimal component:
import numpy as np
m1 = np.array([[-1.15, 2.14, -3.1], [4, -5.1, 6]])
m2 = np.round(m1)
m3 = np.round(m1, 1)
m4 = np.ceil(m1)
m5 = np.array(m1, int)
For a complex array, the complex conjugate can be calculated using the conjugate function, which flips the sign of the imaginary component element by element.
import numpy as np
m1 = np.array([[1+4j, 9-3j, 7+2j], [2-1j, -5+1j, 6]])
m1conj = np.conjugate(m1)
Any complex number can be plotted as a point on a real-imaginary plane, and its angle (in radians) can be calculated using the angle function. This can be converted to degrees using the rad2deg function, which again works element by element. The deg2rad function performs the opposite task.
import numpy as np
m1 = np.array([[1+4j, 9-3j, 7+2j], [2-1j, -5+1j, 6]])
angles = np.angle(m1, deg=False)
angles_d = np.rad2deg(angles)
angles_r = np.deg2rad(angles_d)
statistical functions
Inbuilt Python has the functions max and min, which work with inbuilt collections such as lists and tuples. However when used with a 2d numpy array you will get the error message:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
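A sketch reproducing the error: the inbuilt max iterates over the rows and attempts to compare whole arrays with one another, which is ambiguous:

```python
import numpy as np

m1 = np.array([[1, 9, 7], [2, -5, 6]])
try:
    max(m1)  # compares row against row, producing an ambiguous boolean array
except ValueError as err:
    print(err)
```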
Instead we should use the numpy functions amax and amin (array max and array min) respectively. Let's create a matrix m1:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
Recall that when we used axis with regards to shape, -1 corresponded to the columns and -2 corresponded towards the rows. If we select axis=-2 we are going to create an output array where this axis is collapsed.
So for column 0 we are going to select the value 2, which was in row index 1; for column 1 we are going to select 9, which was in row index 0; and for column 2 we are going to select 7, which was in row index 0. This gives the array np.array([2, 9, 7]) for the return value of amax and np.array([1, 0, 0]) for the return value of argmax, which finds the indices of the maximum values along the axis.
The array given has a shape of (3,), which is neither a column nor a row. In this example it makes sense to reshape it as a row so it looks as expected:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
rmax = np.amax(m1, axis=-2)[np.newaxis, :]
rmaxidx = np.argmax(m1, axis=-2)[np.newaxis, :]
Likewise for axis=-1 we are going to create an output array where this axis is collapsed. So for row 0 the maximum value is 9, which was in column 1, and for row 1 the maximum value is 6, which was in column 2:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
cmax = np.amax(m1, axis=-1)[:, np.newaxis]
cmaxidx = np.argmax(m1, axis=-1)[:, np.newaxis]
If the axis is set to None then the array is collapsed (flattened), the maximum value is returned as an int or float and the index into the flattened array is returned as an int. In this case the maximum value 9 was at flattened index 1:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
flat = np.reshape(np.concatenate((m1,), axis=None), (1, -1))
maxval = np.amax(m1, axis=None)
maxvalidx = np.argmax(m1, axis=None)
amin and argmin work in a similar manner, except they find the minimum value and the index of the minimum value along an axis respectively. A min and max function are also available from the numpy library; these act as aliases (other names) for amin and amax respectively. The names amin and amax are preferred as they give a clearer indication of working with an array.
Instead of collapsing an array by an axis to get the maximum values along that axis or the minimum values along that axis, we can instead sort the elements along the axis. This will give an array of equal dimensions to the original array. For example we can sort by rows by selecting axis=-2:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
rsort = np.sort(m1, axis=-2)
rargsort = np.argsort(m1, axis=-2)
Notice that the last row in rsort and rargsort is identical to the rmax and rmaxidx created earlier. This is because the last row consists of the maximum values along each column. The first row (zero-order indexing) would correspondingly consist of the minimum values and their indices.
We can sort by columns by using axis=-1 and collapse and sort the collapsed array by selecting axis=None as previously discussed.
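A sketch of sorting along the columns and with axis=None using the same matrix:

```python
import numpy as np

m1 = np.array([[1, 9, 7], [2, -5, 6]])
csort = np.sort(m1, axis=-1)          # sort each row along the column axis
print(csort)
# [[ 1  7  9]
#  [-5  2  6]]
flat_sorted = np.sort(m1, axis=None)  # collapse the array, then sort
print(flat_sorted)                    # [-5  1  2  6  7  9]
```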
numpy has a number of functions which carry out basic statistics. All of the functions below have a similar style to amax and amin: one of the keyword input arguments is the axis parameter, with the negative value for the axis being preferred, where -1 means working along the column axis, -2 means working along the row axis and an axis of None collapses the array with the statistical function being run on the collapsed array. In this case I will just demonstrate these by acting along the column axis (axis=-1), which results in a vector that I will reshape into a column vector:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
csum = np.sum(m1, axis=-1)[:, np.newaxis]
cprod = np.prod(m1, axis=-1)[:, np.newaxis]
cmean = np.mean(m1, axis=-1)[:, np.newaxis]
cmedian = np.median(m1, axis=-1)[:, np.newaxis]
cvar = np.var(m1, axis=-1, ddof=1)[:, np.newaxis]
cstd = np.std(m1, axis=-1, ddof=1)[:, np.newaxis]
The var function and the square root of the variance, the standard deviation std, have an additional keyword input argument, delta degrees of freedom (ddof). This modifies the divisor: ddof=1 means we divide by N-1 (where N is the number of datapoints). The std is usually preferred over the var as it has the same units as the mean of the values (the var has squared units).
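A small sketch of the effect of ddof, using a made-up three-point dataset:

```python
import numpy as np

data = np.array([2.0, 4.0, 6.0])
pop_var = np.var(data, ddof=0)     # divide by N (the default): population variance
sample_var = np.var(data, ddof=1)  # divide by N-1: sample variance
sample_std = np.std(data, ddof=1)  # square root of the sample variance
```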
The mean and the std are usually the preferred terms to describe a dataset. However, if the dataset has large outliers and a small sample size, the outliers can skew the mean and the std, and the median may then be a more accurate representation of the dataset.
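A quick illustration with a hypothetical dataset containing one large outlier:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 100.0])  # 100 is an outlier in this made-up data
m = np.mean(data)     # dragged towards the outlier
md = np.median(data)  # robust to the outlier
```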
The prod is the multiplication of values along an axis.
The logical functions all and any collapse an array of booleans along an axis in the same manner, returning True when all (or any) of the elements along that axis are True:

import numpy as np
m1 = np.array([[True, True, False], [True, True, True]])
all_ax2 = np.all(m1, axis=-1)[:, np.newaxis]
The following statistical functions output another matrix. In the case of the cumulative sum (cumsum) and cumulative product (cumprod) the dimensions of the output matrix are identical to the original matrix; in the case of diff the size is 1 less along the axis the function was run along:
import numpy as np
m1 = np.array([[1, 9, 7], [2, -5, 6]])
c_cumsum = np.cumsum(m1, axis=-1)
c_cumprod = np.cumprod(m1, axis=-1)
c_diff = np.diff(m1, axis=-1)
The correlation coefficient can be used to determine if two rows in a matrix correlate to each other.
The output correlation matrix coef1 shows the correlation coefficient for the ith row with respect to the jth row in the original matrix arr1. The diagonals therefore compare a row to itself and will always be 1, as a row is always perfectly correlated with itself.
In the off-diagonal to the bottom left we are examining the correlation of the first row (index 0) with the second row (index 1) of arr1.
For this example we can see that the values are almost identical and therefore the rows have an almost perfect correlation (close to 1), i.e. a change in value in the 0th row will likely strongly correspond to a change in value in the 1st row.
The data is better visualised as a graph (the plot commands won't be further discussed in this guide):
import numpy as np
arr1 = np.array([[1, 2, 3], [1.01, 2.02, 3.03]])
coef1 = np.corrcoef(arr1)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.figure(1)
plt.scatter(arr1[0, :], arr1[1, :])
If we change the data in the second row so that it barely changes, then it is essentially uninfluenced by any changes in the data in the first row and is therefore seen as not correlated (has a correlation close to 0).
import numpy as np
arr2 = np.array([[1, 2, 3], [0.99, 1.02, 0.99]])
coef2 = np.corrcoef(arr2)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.figure(2)
plt.scatter(arr2[0, :], arr2[1, :])
Finally if we change the data in the second row so it has the opposite trend to the data in the first row, we see that the data is strongly negatively correlated (has a correlation close to -1):
import numpy as np
arr3 = np.array([[1, 2, 3], [3.03, 2.02, 1.01]])
coef3 = np.corrcoef(arr3)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.figure(3)
plt.scatter(arr3[0, :], arr3[1, :])
array multiplication
The * operator performs element by element multiplication and the @ operator performs array multiplication. In element by element multiplication the arrays must have equal dimensions however scalar expansion or vector expansion can be used in the case of an interaction with a scalar and a matrix or a vector and a matrix respectively.
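A brief sketch of scalar and vector expansion with the * operator (the row vector used here is illustrative):

```python
import numpy as np

m1 = np.array([[1, 9, 7], [2, -5, 6]])
scaled = 2 * m1  # scalar expansion: every element is doubled
row = np.array([10, 100, 1000])[np.newaxis, :]
expanded = row * m1  # vector expansion: the row vector is broadcast over each row of m1
```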
We can multiply two arrays that have a matching inner dimension.
In the case of 2d arrays, the number of columns of the 1st array must match the number of rows of the 2nd array. The generated output array has the number of rows of the 1st array and the number of columns of the 2nd array. We can multiply the row vector v1 and column vector v2 using v1@v2:
This is better associated with a physical example. In the above imagine v1 as a list of items bought from a shop and v2 as the price of items in the shop. The result 11 is the price you pay at the till.
We can also multiply the column vector v2 and the row vector v1 v2@v1:
Once again let's associate this with a physical example. In the above imagine v2 as a list of stationery and v1 as a list of storage locations. The output array gives the quantity of each item at each storage location.
import numpy as np
v1 = np.array([1, 2])[np.newaxis, :]
v2 = np.array([3, 4])[:, np.newaxis]
inner = v1 @ v2
outer = v2 @ v1
In the case above, we explicitly set up v1 to be a row vector and v2 to be a column vector so we could use the @ operator. Recall that by default a vector is neither a row nor a column. There are two additional functions, outer and inner, which will work with such vectors to perform outer and inner multiplication respectively:
import numpy as np
v1 = np.array([1, 2])
v2 = np.array([3, 4])
inner = np.inner(v1, v2)
outer = np.outer(v2, v1)
It is possible to multiply higher order arrays if the inner dimensions match, however it becomes harder to visualise:
import numpy as np
book = np.array([[[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]],
                 [[13, 14, 15, 16],
                  [17, 18, 19, 20],
                  [21, 22, 23, 24]]])
col = np.array([0.1, 0.2, 0.3, 0.4])[:, np.newaxis]
bookatcol = book @ col
square matrices
Array division is not as straightforward as array multiplication. Let's look at the example we had earlier when we multiplied a row vector by a column vector, and set the values in the column vector to the unknowns x and y:
This multiplies out as a single equation:
1*x + 2*y = 11
With two unknowns and only one equation there is not enough information to solve for x and y uniquely. To find a solution for n unknown variables we need n unique equations, and therefore we need a square matrix. Adding a second equation, 3*x + 1*y = 10 (matching the square matrix used in the code below), the system becomes:

1*x + 2*y = 11
3*x + 1*y = 10
To solve this we can multiply both sides through by the inverse matrix. A matrix multiplied by its inverse gives the identity matrix, and anything multiplied by the identity matrix remains unchanged, so the unknown coefficients are the inverse matrix multiplied by the values on the right hand side. Therefore our unknown coefficients are x=1.8 and y=4.6.
This is different from the x=3 and y=4 we multiplied with earlier: that pair satisfies the first equation alone, which by itself is underdetermined and has infinitely many solutions, but it does not satisfy the full square system, which has a unique solution.
We can create the identity matrix by using the eye function. As the matrix is going to be square, we only need to supply a single dimension. The diag function can be used to read off a diagonal of a square matrix or to create a matrix from a vector where all other elements are 0:
import numpy as np
eye2 = np.eye(2)
diagonal = np.diag(eye2)
eye2b = np.diag(diagonal)
If we use the same example as above, we can call our linear system of equations matrix a, our unknown coefficients c and the values of our linear system of equations b:
Thus:
a@c=b
And we want to calculate c:
import numpy as np
a = np.array([[1, 2], [3, 1]])
b = np.array([11, 10])[:, np.newaxis]
The functions for solving a linear set of equations are compartmentalised into the linalg module of the numpy library. We generally import this separately using:
import numpy.linalg as linalg
Physically this is a reference to the __init__.py file, this time in the subfolder linalg:
%UserProfile%\Anaconda3\lib\site-packages\numpy\linalg\__init__.py
These functions and others are also included in the linalg module of the scipy library. Scientific Python (scipy) can be thought of as an extension of numpy with additional functionality for, as the name suggests, scientific applications.
We can calculate the inverse ainv and, as we have seen:

c = ainv @ b

This gives the same results as shown above.
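A minimal sketch of this using the inv function from the linalg module:

```python
import numpy as np
import numpy.linalg as linalg

a = np.array([[1, 2], [3, 1]])
b = np.array([11, 10])[:, np.newaxis]
ainv = linalg.inv(a)  # inverse of the square matrix a
c = ainv @ b          # the unknown coefficients x and y
```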
We can also use the solve function from the linalg module to solve for c directly using a and b:
import numpy as np
import numpy.linalg as linalg
a = np.array([[1, 2], [3, 1]])
b = np.array([11, 10])[:, np.newaxis]
c = linalg.solve(a, b)
Solving a system of equations like the above is commonly done when interpolating datapoints. Supposing we have the following data and we want to interpolate the velocity value at 16 s:
time (s) | velocity (m/s)
0        | 0
10       | 227.04
15       | 362.78
20       | 517.35
22.5     | 602.97
30       | 901.67
We could take the single nearest datapoint, at 15 s, and our nearest neighbour interpolation is the velocity at that datapoint, 362.78 m/s.
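One way to select the nearest datapoint programmatically (a sketch; the helper variables are my own):

```python
import numpy as np

t = np.array([0, 10, 15, 20, 22.5, 30])
v = np.array([0, 227.04, 362.78, 517.35, 602.97, 901.67])
idx = np.argmin(np.abs(t - 16))  # index of the time closest to 16 s
v16nearest = v[idx]              # nearest neighbour interpolation
```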
Alternatively we could take the two nearest datapoints and construct a linear system of equations which has the form:
c0 + c1 * x = y
In our case x is the time and y is the velocity and we need to calculate the two unknown coefficients c0 and c1:
The left hand side can be converted into matrix form, giving a square matrix a built from the known time terms and a column b of the known velocities. We can once again solve for c and then use the coefficients in c alongside the equation to get our interpolated velocity:
import numpy as np
import numpy.linalg as linalg
a = np.array([[1, 15], [1, 20]])
b = np.array([362.78, 517.35])[:, np.newaxis]
c = linalg.solve(a, b)
v16lin = c[0] + c[1] * 16
This gives the linear interpolation for this datapoint as 393.694 m/s.
We could take the three nearest datapoints and construct a quadratic system of equations which has the form:
c0 + c1 * x + c2 * x**2 = y
import numpy as np
import numpy.linalg as linalg
a = np.array([[1, 10, 10**2], [1, 15, 15**2], [1, 20, 20**2]])
b = np.array([227.04, 362.78, 517.35])[:, np.newaxis]
c = linalg.solve(a, b)
v16quad = c[0] + c[1] * 16 + c[2] * 16**2
This gives the quadratic interpolation for this datapoint as 392.1876 m/s.
We could take the four nearest datapoints and construct a cubic system of equations which has the form:
c0 + c1 * x + c2 * x**2 + c3 * x**3 = y
import numpy as np
import numpy.linalg as linalg
a = np.array([[1, 10, 10**2, 10**3],
              [1, 15, 15**2, 15**3],
              [1, 20, 20**2, 20**3],
              [1, 22.5, 22.5**2, 22.5**3]])
b = np.array([227.04, 362.78, 517.35, 602.97])[:, np.newaxis]
c = linalg.solve(a, b)
v16cubic = c[0] + c[1] * 16 + c[2] * 16**2 + c[3] * 16**3
This gives the cubic interpolation for the datapoint as 392.057 m/s.
The interp function in the numpy library can be used to perform limited linear interpolation. SciPy offers much more flexibility in its interpolate module.
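A minimal sketch of interp applied to the table above:

```python
import numpy as np

t = np.array([0, 10, 15, 20, 22.5, 30])
v = np.array([0, 227.04, 362.78, 517.35, 602.97, 901.67])
v16 = np.interp(16, t, v)  # piecewise linear interpolation at 16 s
```

This agrees with the linear interpolation computed above using linalg.solve.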
This data is better visualised so I will add a basic plot below.
import numpy as np
import scipy.interpolate as interpolate
t = np.array([0, 10, 15, 20, 22.5, 30])
v = np.array([0, 227.04, 362.78, 517.35, 602.97, 901.67])
f1 = interpolate.interp1d(t, v, kind="linear")
t1 = np.arange(0, 31, 1)
v1 = f1(t1)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.scatter(t, v)
plt.plot(t1, v1, color="r")
Note that the output f1 is itself a function. We apply this function to new t2 data to evaluate the interpolant at each time point, returning an array of interpolated v2 datapoints:
import numpy as np
import scipy.interpolate as interpolate
t = np.array([0, 10, 15, 20, 22.5, 30])
v = np.array([0, 227.04, 362.78, 517.35, 602.97, 901.67])
f1 = interpolate.interp1d(t, v, kind="cubic")
t2 = np.arange(0, 31, 1)
v2 = f1(t2)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.scatter(t, v)
plt.plot(t2, v2, color="r")
polynomial fitting
numpy has two functions polyfit and polyval for polynomial fitting, however these are considered legacy and there is more functionality in using the Polynomial class from numpy's polynomial module. The fit method creates an instance of the Polynomial class with a polynomial of degree specified by the keyword input argument deg, fitted to the supplied data t and v. This instance can be printed to get the equation as a str, or the linspace method can be called from it to get a number of equally spaced datapoints across the limits of the fitted data:
import numpy as np
from numpy.polynomial import Polynomial
t = np.array([0, 10, 15, 20, 22.5, 30])
v = np.array([0, 227.04, 362.78, 517.35, 602.97, 901.67])
f1 = Polynomial.fit(t, v, deg=1)
data1 = f1.linspace(31)
f2 = Polynomial.fit(t, v, deg=2)
data2 = f2.linspace(31)
f3 = Polynomial.fit(t, v, deg=3)
data3 = f3.linspace(31)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.figure(1)
plt.scatter(t, v)
plt.plot(data1[0], data1[1], color="r")
plt.title(str(f1.convert()))
plt.figure(2)
plt.scatter(t, v)
plt.plot(data2[0], data2[1], color="r")
plt.title(str(f2.convert()))
plt.figure(3)
plt.scatter(t, v)
plt.plot(data3[0], data3[1], color="r")
plt.title(str(f3.convert()))
trigonometric functions
numpy has a number of trigonometric functions inbuilt such as sin, cos and tan. The linspace function can be used to create linearly spaced values between -2 pi and 2 pi. Then these functions can be used on a numpy array of time to give sin, cos and tan arrays with respect to these times.
import numpy as np
t = np.linspace(-2*np.pi, 2*np.pi, 100)
y1 = np.sin(t)
y2 = np.cos(t)
y3 = np.tan(t)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.figure(1)
plt.plot(t, y1)
plt.plot(t, y2)
plt.figure(2)
plt.plot(t, y3)
I will cover plotting in a different guide, however here you can see the sin waveform with the cos waveform; the sin waveform essentially has a pi/2 lag with respect to the cos waveform.
The tan waveform is the ratio of the sin waveform over the cos waveform. At pi/2, sin(pi/2) = 1 and cos(pi/2) = 0, which gives 1/0, which is infinite, making this function discontinuous. We don't see this as inf on our chart as we are only sampling a small number of datapoints and none fall exactly at pi/2.
There are other functions such as arcsin, arccos, arctan, arcsinh, arccosh and arctanh.
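A quick sketch showing that arcsin undoes sin over the range [-pi/2, pi/2]:

```python
import numpy as np

x = np.linspace(-1, 1, 5)
roundtrip = np.arcsin(np.sin(x))  # recovers x, as x lies within [-pi/2, pi/2]
```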
The exp and log functions interact with an array element by element in the same manner. Recall that the exp and log are the inverse of one another:
import numpy as np
t = np.linspace(1, 10, 100)
y1 = np.exp(t)
y2 = np.log(t)
y3 = np.log(np.exp(t))
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
plt.figure(1)
plt.plot(t, y1)
plt.figure(2)
plt.plot(t, y2)
plt.figure(3)
plt.plot(t, y3)
The function np.log10 is the inverse of raising 10 to a power, recovering the exponent used in scientific notation:
import numpy as np
nums = np.logspace(4, 1, 4)
powers = np.log10(nums)
random number generation
The random module has 3 main functions for random number generation: rand, which returns a float between 0 and 1; randn, which returns a float drawn from a Gaussian distribution centred on 0 (positive or negative); and randint, which generates an int. For rand and randn, the desired array size can be unpacked from a tuple.
For randint there are additional keyword input arguments: we need to specify a start and end int (recall we are inclusive of the lower bound and exclusive of the upper bound, and the step for this function is always 1). We can specify the desired size of the array using the keyword input argument size, assigning it a tuple of the desired dimensions.
We also have the choice function that can be used to randomly select a value or values from a list or a numpy array. The choice function has an additional keyword input size which can be used to create an output array of choices. Note the choice function can select the same choice multiple times.
import numpy as np
import numpy.random as random
np.random.seed(25)
m1_r = random.rand(*(2, 3))
m1_rn = random.randn(*(2, 3))
m1_rint = random.randint(low=0, high=10, size=(2, 3))
nums = np.array([1, 2, 3])
two_nums = random.choice(nums, (2,))
meshgrid
The meshgrid function is usually used for plotting data that has a row of x values, a column of y values and z in matrix form, as shown below:
import numpy as np
import matplotlib.pyplot as plt
x = np.array([5, 6, 7, 8, 9])[np.newaxis, :]
y = np.array([1, 2, 3, 4])[:, np.newaxis]
z = np.array([[0, 1, 1, 1, 0],
              [1, 2, 3, 2, 1],
              [1, 2, 3, 2, 1],
              [0, 1, 1, 1, 0]])
The meshgrid function can be used to create a tuple of x2 and y2 arrays that have the same form as z:
(x2, y2) = np.meshgrid(x, y)
These can be flattened and placed side by side in an output matrix that shows the x, y and z columns respectively:
xyz = np.zeros((z.size, 3))
xyz[:, 0] = x2.flatten()
xyz[:, 1] = y2.flatten()
xyz[:, 2] = z.flatten()