Working with Pandas dataframes with IBM TM1 and Planning Analytics

Pandas is probably the most important Python library to be aware of if you want to write Python scripts on top of your IBM TM1 and Planning Analytics data.

IBM TM1 and Planning Analytics is well known for its very powerful calculation engine and comes with a lot of built-in functions. For the missing calculations, you could write the algorithm yourself in TM1 rules or TM1 processes, but a more efficient way is often to let Python do these calculations for you. The Python ecosystem offers thousands of libraries, and it is very likely that the calculation you are looking for is already available in one of them.

The calculations we want to do in this article are the rolling/moving median and standard deviation; both are available in Pandas. We will proceed in three steps:

  1. Get data out of TM1 in a Pandas dataframe

  2. Do the calculation using Pandas

  3. Send it back to our TM1 cube

If TM1py is installed in your environment, you do not need to install Pandas: Pandas is a dependency of TM1py, so it gets installed along with TM1py.

 

Step 1: Getting data out of TM1 as a dataframe

To get data out of TM1, we use TM1py. TM1py handles the authentication with TM1 and provides us with a lot of useful functions to read data from TM1.

To access TM1py functions, we first need to create an object called tm1 containing all TM1Service properties:

# Create the TM1 service
from TM1py.Services import TM1Service

tm1 = TM1Service(address='localhost', port=8009, user='admin', password='apple', ssl=True)

In this example we are getting data for three cities (NYC, Chicago and Washington) between two dates (2014-01-01 to 2017-12-31) for the version Actual and the measure Count, using the following MDX statement:

# This mdx will be used as data in this example
mdx = """
SELECT 
NON EMPTY {[Date].[2014-01-01]:[Date].[2017-12-31]} ON ROWS,
{[City].[NYC], [City].[Chicago], [City].[Washington]} ON COLUMNS
FROM [Bike Shares]
WHERE ([Bike Shares Measure].[Count], [Version].[Actual])
"""

There are different ways to convert the data coming from TM1 into a dataframe. We could first get the data using tm1.cubes.cells.execute_mdx and then build the dataframe using build_pandas_dataframe_from_cellset:

# retrieve a dictionary through TM1py and build a DataFrame from it
from TM1py.Utils.Utils import build_pandas_dataframe_from_cellset

data = tm1.cubes.cells.execute_mdx(mdx)

df = build_pandas_dataframe_from_cellset(data, multiindex=False)

To show the first 5 records we use df.head():

As the Version and Bike Shares Measure are fixed, it would be cleaner to remove them from this dataframe and only keep the Date, City and Value.

To get the values with only the dimensions on rows and columns, we can use the function tm1.cubes.cells.execute_mdx_dataframe:

# retrieve DataFrame directly
df = tm1.cubes.cells.execute_mdx_dataframe(mdx)
df.head()

Now we need to transform this dataframe to have one date per row, with the value for each city in columns. The for loop below will do just that:

# Transform into something more convenient
for city in ('NYC', 'Chicago', 'Washington'):
    df[city] = df.apply(lambda row: row["Value"] if row["City"] == city else None, axis=1)
df.drop(columns=["Value"], inplace=True)
df = df.groupby("Date").sum()
df.head()

To avoid having to transform our dataframe with the for loop above, we could use the TM1py function execute_mdx_dataframe_pivot, which returns a pivoted dataframe directly:

# retrieve a pivot DataFrame from a cubeview / MDX Query 
pivot = tm1.cubes.cells.execute_mdx_dataframe_pivot(mdx)

With one line of code, this function will give you the same result as above.

 

Step 2: Calculate the rolling median and deviation

Now that we have a Pandas dataframe called df, we can leverage all the Pandas goodness, such as:

  • df.tail() to get the last 5 records

  • df.sample(n) to get n random records

  • df.loc['2016-08-11']['NYC'] to access one cell

  • pivot.loc[("2017-12-31")] to access all cells for one date

  • df.sort_values(by=['NYC'], ascending=True) to sort

  • df.loc[lambda df: df['NYC'] > 74000] to filter

  • df.head().T to transpose

A very powerful method is df.describe(), which gives a summary of the data (count, mean, standard deviation, min, quartiles and max for each column):

 

To calculate the rolling median and rolling standard deviation, we first use the rolling method to define the window of records over which to calculate, and then call .median() on it. To create a new column we use df['NYC Rolling Median']. The following code will create a new column with the rolling median of NYC:

df['NYC Rolling Median'] = df['NYC'].rolling(window=10).median()

To create a new column with the standard deviation for NYC we use the code below:

df['NYC Rolling StdDev'] = df['NYC'].rolling(window=10).std()

We can see now the two new columns in our dataframe:

 

Step 3: Write back to TM1

To write back our two new columns into TM1, we first define a new dataframe containing only these two columns.

df_writeback = df[['NYC Rolling Median', 'NYC Rolling StdDev']]

To replace NA values with 0 we use the fillna(0) function:

df_writeback = df_writeback.fillna(0)

The cellset that we send back to TM1 has to contain one element per dimension, so we just need to add the version Actual, the city NYC and the two new measures:

# Declare the cellset
cellset = {}
# Prepare the cellset
for date in df_writeback.index:
    cellset[('Actual', date, 'NYC', 'Rolling Median')] = df_writeback.loc[date]['NYC Rolling Median']
    cellset[('Actual', date, 'NYC', 'Rolling StdDev')] = df_writeback.loc[date]['NYC Rolling StdDev']

Once our cellset is ready, we just need to call the TM1py function tm1.cubes.cells.write_values with the cube name and the cellset as parameters:

tm1.cubes.cells.write_values("Bike Shares", cellset)

If you get the response 204, it means that the data has been updated in your TM1 cube.

You can run this example in your environment using the TM1py sample called pandas_in_depth from the Samples folder:

The easiest way to test it is to download the TM1py Samples repository.

Working with Python lists and dictionaries in a TM1 context

Lists and dictionaries are probably the most important data structures that you should be aware of when writing Python scripts for your IBM TM1 and Planning Analytics application.

To learn more about all different data structures available in Python, there is already lots of content online such as https://python.swaroopch.com/data_structures.html.

In this article we will focus on lists and dictionaries and then how to use them in a TM1 context.

Lists: an ordered collection of items

A list is a data structure that holds an ordered collection of items i.e. you can store a sequence of items in a list:

fruits = ['apple', 'banana', 'kiwi', 'cherry']
print(fruits)

A list in Python has lots of useful features such as:

Function   Example                                Definition
append     fruits.append('dragonfruit')           Adding a new item
+          healthy_stuff = fruits + vegetables    Concatenating two lists
len()      len(healthy_stuff)                     Getting the number of items

It is also very easy to loop through the list:

for element in healthy_stuff:
    print(element)

In Python, when you want to create a new list from an existing one, it can be done easily through a list comprehension. For example, you could execute the upper function on every item and create a new list from the result. The code almost reads like plain English:
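A minimal sketch, reusing the fruits list from above:

# Create a new list by applying upper() to every fruit
upper_fruits = [fruit.upper() for fruit in fruits]
print(upper_fruits)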

Lists in a TM1 context

Python lists can be used to manipulate elements. For example, let's create a new list called element_names containing all elements from the hierarchy City of the dimension City:

with TM1Service(address=ADDRESS, port=PORT, user=USER, password=PASSWORD, ssl=SSL, decode_b64=DECODE_B64) as tm1:
    element_names = list(tm1.dimensions.hierarchies.elements.get_element_names(
        dimension_name='City', 
        hierarchy_name="City"))

Once our element list has been created, we can leverage list features such as:

Function   Example                   Definition
len()      len(element_names)        Getting the number of elements
in         'NYC' in element_names    Checking if an element exists

Dictionaries

A dictionary is like an address-book where you can find the address or contact details of a person by knowing only his/her name i.e. we associate keys (name) with values (details). The key of each object must be unique.

food_rating = {'Sushi':4, 'Ice Cream':3, 'Bacon':3.5}
print(food_rating)

Once your dictionary has been created, you get access to all dictionary features:

Function      Example                                         Definition
[]            food_rating["Xiao Long Bao"] = 5                Adding a new element with 5 as value
get(object)   food_rating.get("Bacon")                        Getting the value of an object
sum()/len()   sum(food_rating.values()) / len(food_rating)    Calculating the average rating
del           del food_rating['Ice Cream']                    Removing one object

Dictionaries in a TM1 context

Python dictionaries can be used to manipulate TM1 data. Let's create a new dictionary containing the data from an MDX statement:

mdx = """
SELECT
{[Date].[2019-12-20]:[Date].[2019-12-31]} * {[City].[NYC], [City].[Chicago]} ON ROWS,
{[Public Holidays Measure].[Public Holiday]} ON COLUMNS
FROM [Public Holidays]
WHERE ([City].[NYC])
"""

To create a dictionary containing the cells intersection and the values we use the TM1py function tm1.cubes.cells.execute_mdx_rows_and_values:

with TM1Service(address=ADDRESS, port=PORT, user=USER, password=PASSWORD, ssl=SSL, decode_b64=DECODE_B64) as tm1:
    holidays = tm1.cubes.cells.execute_mdx_rows_and_values(
        mdx=mdx, 
        element_unique_names=False)
print(holidays)

The dictionary created by TM1py will contain the cell coordinates as keys and the cell values as values. We can then easily loop through it and print each intersection with its values:

for row_elements, cells in holidays.items():
    print(*row_elements, *cells)

To get a specific value from this dictionary, you need to specify all coordinates, in this example there are only two dimensions in this cube:

  • holidays[('2019-12-25', 'Chicago')] returns 1

  • holidays[('2019-12-25', 'CHI CAGO')] returns 1

It can be noted that ‘Chicago’ or ‘CHI CAGO’ will return the same value because the dictionary created by TM1py is case and space insensitive.

More examples are available in the data_structures_in_python.ipynb Jupyter Notebook available in the TM1py samples Data folder:

The easiest way to test it is to download the whole TM1py Samples repository.

Getting Data from TM1 with Python

TM1py makes all the goodness of the Python ecosystem available to your IBM TM1 and Planning Analytics cube data. Python is very popular for Data Science and much more. To use a Python library with your cube data, you need to get the data out of your cubes, do the magic and send it back to TM1. In this article we focus on the first step: getting data out of TM1.

There are different ways to achieve this; over time the TM1py community has made available lots of functions to cater for different scenarios (getting the data into a Pandas dataframe, into a CSV format…). All these functions are available in the cubes.cells object.

In your Python code, you first need to declare a TM1 service object as below:

# Connect to the TM1 instance
from TM1py.Services import TM1Service

tm1 = TM1Service(address=ADDRESS, port=PORT, user=USER, password=PWD, ssl=SSL)

The functions to get data start with execute_. To see them, first type tm1.cubes.cells. and then, if you are using Jupyter Notebook or PyCharm, press the TAB key to show the list of available functions:

There are two ways to get data from a TM1 cube: either from an existing cube view or from an MDX statement:

  • execute_mdx…: getting data from a MDX query

  • execute_view…: getting data from a cube view

Getting values with coordinates

In the example below we use execute_view to get the data from a cube view as a cellset with the cell coordinates and values:

private=False means that the view is public. If it were a private view, we would have to use private=True.
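Below is a minimal sketch; the cube and view names are assumptions borrowed from the examples earlier in this series:

cells = tm1.cubes.cells.execute_view(cube_name="Bike Shares", view_name="2017 Counts by Month", private=False)

# Each entry maps the cell coordinates to a cell dictionary containing the value
for coordinates, cell in cells.items():
    print(coordinates, cell["Value"])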

Getting data as CSV format

execute_view_csv will retrieve the values in CSV format, which is very useful if you want to extract data from a cube view into a csv file:
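A minimal sketch, reusing the same assumed view, that writes the returned CSV string to a file:

csv_content = tm1.cubes.cells.execute_view_csv(cube_name="Bike Shares", view_name="2017 Counts by Month", private=False)

with open("data.csv", "w") as file:
    file.write(csv_content)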

Getting data as a Pandas dataframe

execute_view_dataframe will retrieve the data as a Pandas dataframe. Once your values are stored in a Pandas dataframe, you can start using all the popular Pandas functions, which is very common in the Data Science world:

With a Pandas dataframe, you can export cube data into a csv file with two simple lines of code:

# Get the data
df = tm1.cubes.cells.execute_view_dataframe(cube_name=cube_name, view_name=view_name, private=False)
# Send data into csv
df.to_csv("data.csv")

Getting data from MDX

All the functions described above exist for MDX as well. For example, to get data from an MDX query, we use the execute_mdx function:
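A minimal sketch with an assumed MDX query:

mdx = "SELECT {[City].[NYC], [City].[Chicago]} ON COLUMNS FROM [Bike Shares]"
cells = tm1.cubes.cells.execute_mdx(mdx)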

More examples are available in the reading_data Jupyter Notebook from the TM1py samples data folder:

The easiest way to test it is to download the whole TM1py Samples repository.

Installing TM1py

This article explains the steps to install TM1py. Installing TM1py will take you just a few minutes.

Prerequisites

  • TM1 (10.2.2 Fix Pack 5 or higher)

Checking if Python is installed

TM1py is a Python package; in order to use TM1py you need to install Python. First, we need to check if you have Python installed. Open the command line and type:

  • python --version

To open the command line, press the Windows key on the keyboard, type 'cmd' and then press Enter.

If you get an error message, it means that Python is not installed. If Python is already installed, you can just use pip install TM1py to install the TM1py package.

Installing Python or Anaconda

To install Python you can choose either the official Python distribution or another Python platform such as Anaconda. We recommend installing Anaconda as it contains Python plus a lot of very useful libraries and tools such as Jupyter Notebook:

To get the latest Anaconda version click the following link:

Choose the correct distribution:

Once downloaded, just execute the installer. You can keep the default settings and click Next until the end of the installer (the installation should take less than 10 minutes):

Installing TM1py

To install TM1py with Anaconda, just open the Anaconda prompt:

and then type the following command line:

  • pip install TM1py

Check connectivity with your TM1 instance

Once TM1py is installed, you need to check the connectivity with your IBM TM1 or Planning Analytics instances. To do that, just follow the steps described in the following article:

CubeCalc functions

To calculate all these functions, CubeCalc leverages the Python libraries NumPy, functools and SciPy. This article lists all functions available in CubeCalc:

Name          Function                Parameters
effect        manual                  effect(nominal_rate, npery)
fv            numpy.fv                fv(rate, nper, pmt, pv, when=0)
fv_schedule   functools.reduce        functools.reduce(lambda x, y: x + (x * y), values, float(principal))
irr           numpy.irr               irr(values)
mirr          numpy.mirr              mirr(values, finance_rate, reinvest_rate)
nper          numpy.nper              nper(rate, pmt, pv, fv=0, when=0)
nominal       manual                  nominal(effect_rate, npery)
npv           numpy.npv               npv(rate, values)
_nroot        manual                  _nroot(value, n)
pmt           numpy.pmt               pmt(rate, nper, pv, fv=0, when=0)
pv            numpy.pv                pv(rate, nper, pmt, fv, when=0)
rate          numpy.rate              rate(nper, pmt, pv, fv=0, when=0, guess=0.1, maxiter=100)
stdev_p       numpy.std               numpy.std(values)
stdev_s       numpy.std               numpy.std(values, ddof=1)
sln           manual                  sln(cost, salvage, life)
xirr          scipy.optimize.newton   optimize.newton(lambda r: xnpv(r, values, dates), float(guess))
xnpv          manual                  xnpv(rate, values, dates)

Getting Started with CubeCalc

This article explains how to set up CubeCalc, a super calculator for IBM TM1 and Planning Analytics.

CubeCalc enables you to execute complex finance calculations which are not available natively in IBM TM1 and Planning Analytics such as IRR (Internal Rate of Return) and NPV (Net Present Value).

CubeCalc installation steps

Prerequisites:

  • TM1 10.2 FP5 as minimum

  • TM1py v1.0.3 as minimum

1. Install Python

CubeCalc is a Python script; to run it, you first need to install Python. Steps to install Python can be found here.

2. Install TM1py

CubeCalc uses the free Python package TM1py, so the next step is to install the TM1py package. Open a command prompt and then type:

pip install TM1py

3. Download CubeCalc folder

CubeCalc is an open-source Python script which you can download from its GitHub page:

In this example, the CubeCalc folder has been copied to the C:\TM1py\cubecalc folder:

4. Update config.ini file

In the config file example, you will find two instances, but you can add as many as you want. To add a TM1 instance, just add the following information (a sample config.ini sketch follows the list):

  • [instanceName]

  • address: IP address of the physical server where the TM1 instance is located

  • port: HttpPortNumber of the instance (tm1s.cfg)

  • namespace: (only required for CAM Security)

  • user: TM1 user

  • password: TM1 user password

  • ssl: True/False (depending on the UseSSL setting in tm1s.cfg)
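A minimal config.ini sketch under these assumptions (the instance name, address and credentials are placeholders):

[tm1srv01]
address=localhost
port=8009
user=admin
password=apple
ssl=True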

To check the connectivity with your TM1 instance, you can follow the steps described here:

Execute your first CubeCalc calculation

Setup samples TM1 cubes

The following examples use TM1 objects which can be created using the “setup sample.py” script located in the cubecalc folder.

To execute the script, open a command prompt, go to the cubecalc folder using cd C:\TM1py\cubecalc and then execute the script with the command python "setup sample.py". If the script ran successfully, the prompt should go to the next line:

In your TM1 instance, you should now see two new cubes (Py Project Planning and Py Project Summary):

Example: Calculate IRR

The internal rate of return (IRR) is a metric used in capital budgeting to estimate the profitability of potential investments. It is a pretty common function in Excel; now, thanks to CubeCalc, you can calculate the IRR in IBM TM1 and Planning Analytics with one command line.

In this example, we want to calculate the IRR for the Project 1 for the next 12 quarters. The first view on the left is our data and the view on the right is the cell where CubeCalc will send the result of the IRR:

In this example we are using a cube view as the data source, so our command line uses the parameters cube_source "Py Project Planning", view_source "Project1", cube_target "Py Project Summary" and view_target "Project1 IRR". The command line is as below:

python CubeCalc.py --method "IRR" --tm1_source "tm1srv01" --tm1_target "tm1srv01" --cube_source "Py Project Planning" --cube_target "Py Project Summary" --view_source "Project1" --view_target "Project1 IRR" --tidy False

The tidy parameter enables you to delete the source and target views after they have been used: to keep the views, set the value to False; to delete them, set it to True.

Copy the above command line, paste it into the command prompt and press Enter:

If you don’t get any messages, it means CubeCalc ran successfully; you just need to recalculate the view Project1 IRR to see the result 0.0126231328:

If you compare in Excel, you will get the same value:

Run CubeCalc from a TM1 process

To run CubeCalc from a TM1 process, you just need to use the ExecuteCommand function either in the Prolog or Epilog tab:

sCommand = 'python C:\TM1py\cubecalc\CubeCalc.py --method "IRR" --tm1_source "tm1srv01" --tm1_target "tm1srv01" --cube_source "Py Project Planning" --cube_target "Py Project Summary" --view_source "Project1" --view_target "Project1 IRR" --tidy True';

ExecuteCommand(sCommand,1);

The second parameter is 1 if you want to wait for the command to finish, or 0 to terminate the process straight away.

Troubleshoot CubeCalc

If CubeCalc returns an error, the best place to start looking is the log file. CubeCalc creates a CubeCalc.log file in the same folder as CubeCalc.py.

The most common errors are the following:

  1. CubeCalc can't connect to the TM1 instance: first check the settings in config.ini (Check connectivity with TM1).

  2. The TM1py version should be v1.0.3 as a minimum (Upgrade TM1py).

Installing TM1py on a server with no internet connection

This article describes the steps to install TM1py on a server with no internet connection.

1 - Download Python or Anaconda

TM1py is a Python package; it requires either Python or Anaconda (a platform to manage your Python packages) to be installed first:

2 - Install Python or Anaconda

Once downloaded, copy the installer onto your server and follow the installer steps. In this article we chose to install Anaconda.

3 - Check if Anaconda is installed correctly

Open the Anaconda command prompt and type:

  • where Python

It will return the path where Python is installed.

4 - Install TM1py

All TM1py versions can be found in the GitHub repository:

Download the source code:

Unzip it and copy the TM1py folder:

Then paste the TM1py folder into C:\ProgramData\Anaconda3\Lib\site-packages (or wherever the install directory of Python/Anaconda is located on that machine).

The TM1py directory inside the lib\site-packages folder should then look exactly like the lib\site-packages\TM1py folder on a machine where you installed TM1py through pip.
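As a rough sketch, the resulting layout should look something like this (subfolder names based on the TM1py source tree):

C:\ProgramData\Anaconda3\Lib\site-packages\TM1py\
    Exceptions\
    Objects\
    Services\
    Utils\
    __init__.py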

Run processes in parallel using only one TM1 connection


Being able to run processes in parallel is one of the most powerful features of IBM TM1 and Planning Analytics. Instead of loading one year of data with one process, you could run one process per month in parallel, which will significantly speed up your loading time!

Run processes in parallel with TM1RunTI

Running processes in parallel is quite common now in IBM TM1 and Planning Analytics applications. The most common way to do it is to use TM1RunTI.

TM1RunTI is an executable located in the TM1 bin installation folder. With simple command lines as below you can run multiple processes in parallel:

"C:\Program Files\Cognos\TM1\bin\tm1runti.exe" -process Cub.Flight -adminhost CW111 -server flightstats -user TI01 -pwd "" pYear=2000 pMonth=01
"C:\Program Files\Cognos\TM1\bin\tm1runti.exe" -process Cub.Flight -adminhost CW111 -server flightstats -user TI02 -pwd "" pYear=2000 pMonth=02
"C:\Program Files\Cognos\TM1\bin\tm1runti.exe" -process Cub.Flight -adminhost CW111 -server flightstats -user TI03 -pwd "" pYear=2000 pMonth=03

Manage running processes with Hustle

Hustle was created to manage the threads and to make sure that you do not exceed the number of available threads on the server. Hustle uses TM1RunTI and therefore has the same limitation: for each thread, it first needs to connect to TM1 before it can execute it.

In certain applications, TM1 developers had to set up one different user per thread to avoid locking.

Run processes in parallel with only one connection

To avoid this situation RushTI was created:

RushTI does the same as Hustle except that it connects to TM1 only once at the beginning and then reuses the same connection to run all threads.

This article explains how to install and use RushTI.

Prerequisites:

  • TM1 10.2 FP5 as minimum

  • TM1py v1.0.3 as minimum

1. Install Python

RushTI is a Python script; to run it, you first need to install Python. Steps to install Python can be found here.

2. Install TM1py

RushTI uses the free Python package TM1py, so the next step is to install the TM1py package. Open a command prompt and then type:

pip install TM1py

3. Download RushTI folder

RushTI is an open-source Python script which you can download from its GitHub page:

In this example, the RushTI folder has been copied to the C:\RushTI folder:

4. Update config file

In the config file example, you will find two instances, but you can add as many as you want. To add a TM1 instance, just add the following information:

  • [instanceName]

  • address: IP address of the physical server where the TM1 instance is located

  • port: HttpPortNumber of the instance (tm1s.cfg)

  • namespace: (only required for CAM Security)

  • user: TM1 user

  • password: TM1 user password (encoded in Base 64)

  • ssl: True/False (depending useSSL setting in tm1s.cfg)

WARNING: The password needs to be encoded in Base 64.
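One quick way to produce the Base64-encoded password is a Python one-liner (a sketch; any Base64 tool works, and 'apple' is just an example password):

import base64

# Encode the plain-text password to Base64
print(base64.b64encode('apple'.encode('utf-8')).decode('utf-8'))  # prints YXBwbGU=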

5. Update tasks.txt

The file tasks.txt contains the list of all processes to be executed by RushTI. In this example, the process Cub.GeneralLedger.LoadFromFile will be executed for every month:

instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=01
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=02
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=03
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=04
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=05
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=06
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=07
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=08
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=09
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=10
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=11
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=12

It can be noted that in the same list, you can run processes from different TM1 instances by changing the instance parameter:

instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=01
instance="tm1srv02" process="Cub.GeneralLedger.LoadFromFile" pMonth=01

6. Run RushTI from command line

To run RushTI from the command line, just use the following command (the last parameter is the number of threads RushTI will run at the same time):

python RushTI.py Tasks.txt 4

If you are not in the RushTI folder, you will have to add the path to the script:

python C:\RushTI\RushTI.py C:\RushTI\Tasks.txt 4

The following video shows the Pulse Live Monitor with 4 threads running at the same time:

Run RushTI from a TM1 process

To run RushTI from a TM1 process, you just need to use the ExecuteCommand function either in the Prolog or Epilog tab:

sCommand = 'python C:\RushTI\RushTI.py C:\RushTI\Tasks.txt 4';

ExecuteCommand(sCommand,1);

The second parameter is 1 if you want to wait for the command to finish, or 0 to terminate the process straight away.

Troubleshoot RushTI

If the threads are not running, the best place to start looking is the log file. RushTI creates a RushTI.log file in the same folder as RushTI.py.

The most common errors are the following:

  1. RushTI can't connect to the TM1 instance: first check the settings in config.ini (Check connectivity with TM1).

  2. The TM1py version should be v1.0.3 as a minimum (Upgrade TM1py).

  3. Check the parameters of the TM1 processes.

Setup Cubike example

Cubike is a fictional Bike Sharing company that we use in the series of articles about Data Science with TM1 and Planning Analytics:

If you are new to TM1py, this article will guide you through setting up TM1py and all the different modules required for the Cubike example.

Prerequisites:

The only prerequisite is to have IBM TM1 10.2.2 FP5 as a minimum already installed. For this example we are going to work with the TM1 instance called tm1srv01:

 

1. Install Python

TM1py is a Python package. To install TM1py, we first need to install Python. The steps to install Python can be found here:

Once Python has been installed, check that the installation was successful. Open a command line as administrator:

and then type the following command to get the Python version:

python --version

In this example the Python version is v3.6.2:

 

2. Install TM1py

First you need to navigate to the Scripts folder where Python has been installed. To find out the Python location, type the following in a Windows command prompt:

where Python

It should return the location:

To go to the Python location just type cd and then the folder location:

cd "C:\Python36\Scripts"

To install TM1py you just need the following command line:

pip install TM1py

To check if the installation was successful just type:

pip show TM1py

It is going to show you information about TM1py, including the version number, in this example v1.0.2:

 

3. Enabling TM1 REST API

TM1py uses the TM1 REST API to connect to the TM1 instance. In this example, our tm1srv01 instance uses port 17001 for the REST API. To find out which port your TM1 instance is using, open the tm1s.cfg file and check the HttpPortNumber value.
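For reference, the relevant tm1s.cfg settings would look like this (the values are assumptions matching this example):

HttpPortNumber=17001
UseSSL=T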

To check that the REST API is enabled, open a web browser and paste the following URL:

  • https://localhost:17001/api/v1/$metadata 

You may be prompted with a certificate warning message; click Advanced and then Proceed to localhost (unsafe):

You should then see the metadata information of your TM1 instance:

 

4. Test connection with TM1

Once the TM1 REST API has been enabled, the next step is to check if you have the right TM1 settings to connect to your TM1 instance. Just follow the steps described in the following article:

After running the check.py script from the TM1py samples, TM1py should be able to find your TM1 instance name:

 

5. Download Cubike example

All the files you need to go through the Cubike examples are gathered in one zip file which you can download here:

After unzipping this file, you should see two Jupyter notebooks, four csv files and one Python script:

  • chicago.csv: Bike Sharing data from Chicago.
  • Data Integration with TM1py.ipynb: Jupyter Notebook to load data into TM1 from a web service.
  • Data Science with TM1py.ipynb: Jupyter Notebook to analyse the data.
  • nyc.csv: Bike Sharing data from New York City.
  • public holidays.csv: Public Holidays data.
  • washington_dc.csv: Bike Sharing data from Washington DC.
  • setup.py: Python script to create TM1 objects.
 

6. Install Jupyter Notebook

For demonstration and documentation purposes, all the Python code in this series is distributed through Jupyter Notebooks. Jupyter Notebooks allow you to create and share documents that contain live code, equations, visualizations and narrative text. A quick introduction to Jupyter can be found here.

This example contains two Jupyter notebooks (.ipynb). To open these two notebooks, we have to install Jupyter first.

To install Jupyter, you just need to run the following command from the Scripts Python folder:

pip install jupyter
 

7. Open a Jupyter Notebook

To open a Jupyter Notebook, you need to run Jupyter. Go to the folder where you downloaded the Cubike example, type cmd in the Windows Explorer address bar and press Enter to open a Windows command prompt at this location:

To run Jupyter from this location, just type:

jupyter notebook

A new tab in your web browser should open and you should see the two Jupyter Notebook (.ipynb) files; just click on the first one to open it:

Once opened, you can use the buttons at the top to navigate through the code blocks. The most important buttons are the Up/Down arrows and the Run button, which executes a code block and goes to the next one:

 

Other Python packages required for the Part 2 and Part 3 articles:

To run the Data Science with TM1py.ipynb, you will need to install in the Python folder, the following packages:

1) Plotly is a graphing library:

pip install plotly

2) A C++ compiler is required by Facebook Prophet; you can download it from Build Tools for Visual Studio 2017 and run the .exe file as administrator.

A restart of the server will be required.

3) Msgpack (required by Facebook Prophet):

pip install msgpack-python

4) PyStan (required by Facebook Prophet):

pip install PyStan

5) Facebook Prophet is a forecasting tool:

pip install fbprophet

6) Matplotlib is a plotting library:

python -m pip install -U matplotlib
 

Cubike Part 1

You are now ready to start the Part 1 article:

 

Timeseries Forecasting with Facebook Prophet and TM1/Planning Analytics

Welcome to the last part of the article series about Data Science with TM1/Planning Analytics and Python. In Part 1 we loaded weather data from the NOAA web service into our TM1 cubes. In Part 2, by analyzing the data with Pandas and Plotly, we learned that:

  • There are strong seasonal trends throughout the year
  • Public Holidays and weekends have a negative impact on the bike shares
  • Temperature and Bike shares are strongly correlated in every city. 
  • The intensity with which temperature affects the bike shares varies by city. Washington DC is the city that is least affected by the weather.

Before going any further, we recommend you read these two articles:

Objective

In this article, we are going to explain how to use Facebook's Prophet to create a two-year demand forecast for bike sharing, based on four years of actuals from our TM1 cube.

Before we start with the implementation, let's quickly discuss what Prophet is.

Prophet

The idea behind the Prophet package is to decompose a time series of data into the following three components:

  • Trends: these are non-periodic and systematic trends in the data,
  • Seasonal effects: these are modelled as daily or yearly periodicities in the data (optionally also hourly), and
  • Holidays / one-off effects: one-off effects for days like: Black Friday, Christmas, etc.

Based on our historic data, Prophet fits a model, where each of these components contribute additively to the observed time series. In other words, the number of bike shares on a given day is the sum of the trend component, the seasonal component and the one-off effects.
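Written as a formula (this is the standard form of Prophet's additive model), the observed value on day t is:

y(t) = g(t) + s(t) + h(t) + ε(t)

where g(t) is the trend component, s(t) the seasonal component, h(t) the holiday effect and ε(t) an error term.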


Step 1: Data Wrangling

In Part 2 of this series, we already loaded the actuals from the Bike Shares cube into Python. We called the variable df_b.

Before we can use this data to fit our Prophet model, we must make sure we arrange the data in the correct format.

The dataframe that Prophet needs has two columns:

  • ds: dates
  • y: numeric values

We execute the following code to arrange our dataframe.

# We focus on NYC in this example (assumption based on the df_nyc name)
city = 'NYC'

df_nyc = df_b[city].reset_index()
df_nyc.rename(columns={'Date': 'ds', city: 'y'}, inplace=True)
df_nyc.tail()

We use the tail function on our dataframe (df_nyc) to display the last 5 rows of data:


Step 2: Fitting the model

Now that we have the data ready, and a high level understanding of the seasonal trends in our data, we are ready to fit our model!

First we need to instantiate Prophet. We are passing two arguments to the constructor of the Prophet model:

  • The public holidays that we want Prophet to take into account
    (they come from a TM1 cube through MDX. More details in the Jupyter notebook)
  • Whether or not Prophet should model intraday seasonality

m = Prophet(holidays=holidays, daily_seasonality=False)

Now we can fit our model by executing the fit method on our model, passing the dataframe that we arranged in step 1.

m.fit(df_nyc);

This is where Prophet actually does all the hard work: the curve fitting. Under the hood, Prophet uses Stan to run the statistical calculations as fast as possible.


Step 3: Use Facebook's Prophet to forecast the next 2 years

We can use the fitted Prophet model to predict values for the future.

First we need to specify how many days we would like to forecast forward. The following code creates a dataframe with the corresponding window of future dates:

future = m.make_future_dataframe(periods=365*2)

Then we use the predict function on our model, passing the dataframe future as the argument:

forecast = m.predict(future)

Done! The forecast is ready. Let's look at what Prophet predicted!

We select the following columns on the dataframe:

  • ds (the date)
  • yhat (the predicted value)
  • yhat_lower (the lower bound of the confidence interval)
  • yhat_upper (the upper bound of the confidence interval)

The following code prints the last 5 records:

forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()

Step 4: Analysing the forecast

We can interrogate the model a bit to understand what it is doing. The best way to do this is to see how the model fits the existing data and what the forecast looks like. This is shown in the plot below.

The black dots correspond to the historic number of bike shares each day (2014-2018). The dark blue line represents the estimated number of shares, projected with the fitted model. The light blue lines correspond to the 80% confidence interval for the model's predictions.

Judging visually, the model has done a good job of picking up the yearly seasonality and the overall trend. The forecast for 2019 and 2020 looks plausible!

To get an even deeper understanding of our fitted model, we can plot each of the model components. This is shown in the plot below.

In the top panel we see the linear growth term. This term contains changepoints (either determined independently by Prophet or preset by the user), so that the rate of growth is allowed to vary over time. The second panel shows the effect that public holidays have on our bike shares. The final two panels show the estimated yearly and weekly trends of the model:


Conclusion on this analysis:

  • An overall global trend of growth from 2015, which slowed down slightly after 2016.
  • Public holidays lead to a fall in the usage of the bikes.
  • A strong weekly seasonality: our bikes are used mostly during the week – presumably for commuting.
  • A strong yearly seasonality with a peak in summer/autumn and a drop in winter.

Step 5: Send the data back to TM1

Before sending the data back to TM1, we need to rearrange it so it matches the dimensions in our cube:

  • Version: Prophet Forecast
  • Date: date
  • City: city
  • Bike Shares Measures:
    • Count for yhat
    • Count Lower for yhat_lower
    • Count Upper for yhat_upper

To rearrange the data for TM1 we execute the following code:

cells = {}

for index, row in forecast.iterrows():
    date = str(row['ds'])[0:10]
    cells['Prophet Forecast', date, city, 'Count'] = round(row['yhat'])
    cells['Prophet Forecast', date, city, 'Count Lower'] = round(row['yhat_lower'])
    cells['Prophet Forecast', date, city, 'Count Upper'] = round(row['yhat_upper'])

Once our data set is ready, we use the TM1py function tm1.cubes.cells.write_values to send the data to our cube Bike Shares:

tm1.cubes.cells.write_values('Bike Shares', cells)

Let's check in the cube view: years 2014 to 2017 are actual data and the forecast starts in 2018:

See Prophet in action with Jupyter

Explore your TM1/Planning Analytics data with Pandas and Plotly

Welcome to the second part of the Data Science with TM1/Planning Analytics series. In the Part 1 article, we uploaded weather data from a web service into our TM1 cubes. Now that we have all the data we need in TM1, we can start to analyse it. The Python community provides lots of tools which make data science easy and fun.

Why use Python for Data Science?

• Top-notch, free data analysis libraries.
• Free (and good) libraries to access data from systems or the web.
• Lots of research in new disciplines like Machine Learning (TensorFlow) or Natural Language Processing (NLTK).
• TM1 (as the Information Warehouse) is the ideal data source for Data Science.

Before going through this article, you should first read Part 1:

Objective

The objective of this article is to explore the impact of seasonal effects, weather and public holidays on our operative business. To do that we are going to follow these steps:

1. Load and visualize monthly bike shares by city
2. Explore seasonal and regional trends
3. Analyze the relationship between average temperatures and bike shares by day
4. Analyze the impact of non-working days vs. working days

We created a Jupyter notebook to guide you through these steps. The notebook that we are going to use in this article is Data Science with TM1py.ipynb; you can find it in the cubike-sample.zip file which you can download in the Part 1 article.

Prerequisites

To run this example on your machine, you will first have to make sure that you have installed the following components:


Step 1: Load and visualize 2017 monthly bike shares by city

The first step is to define the TM1 connection settings, which you find at the top of the notebook:

ADDRESS = "localhost"
PORT = 8009
USER = "admin"
PWD = ""
SSL = True
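The notebook then opens a TM1 connection using these constants; a minimal sketch, mirroring the connection pattern used elsewhere in this series:

from TM1py.Services import TM1Service

tm1 = TM1Service(address=ADDRESS, port=PORT, user=USER, password=PWD, ssl=SSL)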

Before we start with the analysis, we first need to bring data from TM1 into our notebook.

We start with data from the view 2017 Counts by Month of the Bike Shares cube. To load data from the cube view into Python we execute the following code:

cube_name = 'Bike Shares'
view_name = '2017 Counts by Month'

data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False)

The cellset given back by TM1 is stored in the variable data. We use a TM1py function to convert the cellset into a pandas dataframe, which we call df:

df = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)

We now need to rearrange the dataframe: we reduce the dimensionality of our data from 4 (Time, Version, City, Measure) to 2 (Time, City). This should make our life easier down the road.

df['Values'] = df["Values"].replace(np.NaN, 0)
for city in ('NYC', 'Chicago', 'Washington'):
    df[city] = df.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1)
df.drop(columns=["Values"], inplace=True)
df = df.groupby("Date").sum()

To show the rearranged dataframe, we can just type df into a Jupyter cell and execute it.

Let's plot a bar chart from our dataframe to explore and understand the monthly distributions throughout the cities visually.

Using the popular charting library Plotly, we can create an interactive barplot with just a few lines of Python code:

cities = ('NYC', 'Chicago', 'Washington')

# define Data for plot
data = [go.Bar(x=df.index, y=df[city].values, name=city) for city in cities]

# define Layout. stack vs. group !
layout = go.Layout(
    barmode='stack',
    title="Bike Shares 2017"
)

# plot
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)

As you can see below, the Plotly package creates an interactive bar chart; you can untick a city and zoom into a specific chart area:

Step 1 - Video

Step 1 - Conclusion

As expected, the seasons have a massive impact on our bike sharing business. In the warmer months we have substantially more usage than in the colder months.

Also interesting is that the seasons seem to impact cities differently. While the ratio between summer and winter months in NYC and Washington DC is approximately 1/2, in Chicago it is close to 1/5!

Let's dig deeper into the relationships between the cities and the temperature!


Step 2: Explore seasonal and regional trends

In order to get a clearer picture, instead of just looking at the monthly figures of 2017, we are going to bring in the daily data from 2014 to 2017.

We execute the following code to create a new dataframe df_b based on the data coming from the view 2014 to 2017 Counts by Day:

cube_name = 'Bike Shares'
view_name = '2014 to 2017 Counts by Day'

data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False)
df_b = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)
df_b['Values'] = df_b["Values"].replace(np.NaN, 0)

# Rearrange content in DataFrame
for city in ('NYC', 'Chicago', 'Washington'):
    df_b[city] = df_b.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1)
df_b.drop(columns=["Values"], inplace=True)
df_b = df_b.groupby("Date").sum()

Let's print 5 sample records from the dataframe using df_b.sample(5):

Pandas dataframes come with very handy and easy-to-use tools for data analysis. To calculate the correlation between the different columns (cities) in our dataframe, we can just call the corr function on our dataframe: df_b.corr():

As one would expect, the bike shares by day across our three cities are all strongly correlated.

To analyze the relationship between the average temperature and the bike shares by city, we need to query the daily historic average temperatures from our TM1 cube into Python.

We execute the following code to create a dataframe (we call it df_w) based on the cube view 2014 to 2017 Average by Day of the Weather Data cube:

cube_name = 'Weather Data'
view_name = '2014 to 2017 Average by Day'

data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False)
df_w = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)

To calculate the correlation between the two dataframes (df_w and df_b) we can use the corrwith function:

df_b.corrwith(df_w)

Step 2 - Video

Step 2 - Conclusion

Temperature and bike shares are strongly correlated in every city. For the forecasting (part 3 of this series) to be effective, we will need a model that can take seasonal effects (e.g. temperature) into account.

The intensity with which temperature affects the bike shares varies by city. For the forecasting we will need to create different models by city.


Step 3: Analyze the relationship between average temperature and bike shares by day

Let's visualize the relationship between temperature and bike shares in a scatterplot. From our two dataframes, df_w (average temperature by day) and df_b (bike shares per day), we can create a scatterplot in just a few lines of code:

cities = ('NYC', 'Chicago', 'Washington')
colors = ('rgba(222, 167, 14, 0.5)', 'rgba(31, 156, 157, 0.5)', 'rgba(181, 77, 52, 0.5)')

# Scatterplot per city
data = [go.Scatter(
    x=df_w[city].values,
    y=df_b[city].values,
    mode='markers',
    marker=dict(
        color=color
    ),
    text=df_w.index,
    name=city
    ) for (city, color) in zip(cities, colors)]

# Plot and embed in jupyter notebook!
py.iplot(data)

Step 3 - Video

Step 3 - Conclusion

Analyzing the plot visually, we can make a few statements:

• Among the three cities, the distribution in Chicago is the closest to a linear model. Judging visually, we could draw a neat line through that point cloud.

• For Washington DC we can recognize an interesting trend: for temperatures of approx. 25 degrees and higher, the bike count stagnates.

• The distribution for NYC is less homogeneous. A simple linear model would not sufficiently explain the bike shares count.

Let's quantify those findings and take a closer look at how non-working days impact our operative business.


Step 4: Analyze the impact of non-working days vs. working days

To analyze the impact of public holidays and weekends, we will focus on one city at a time. Let's start with NYC.

First we want to create a linear regression between the average temperatures and the bike shares for NYC.

To calculate the fitted line we use the linregress function from the popular SciPy stats module.

Note that the function not only returns the slope and intercept of the fitted line, but also three measures (the R value, the P value and the standard error) that quantify how well the fitted line matches the observations.

x, y = df_w[city].values, df_b[city].values
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)

Now we need to query public holidays and weekends from TM1 through two small MDX queries and merge them into one list, which we call non_work_days.

mdx = "{ FILTER ( { TM1SubsetAll([Date]) }, [Public Holidays].([City].[NYC]) = 1) }"
public_holidays = tm1.dimensions.execute_mdx("Date", mdx)

mdx = "{FILTER( {TM1SUBSETALL( [Date] )}, [Date].[Weekday] > '5')}"
weekends = tm1.dimensions.execute_mdx("Date", mdx)

non_work_days = public_holidays + weekends

Now we can create a new scatterplot that includes the fitted line (orange), the working days (blue) and the non-working days (green).
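Below is a minimal sketch of how such a plot could be assembled with Plotly; the exact notebook code may differ, and the split of dates into working/non-working days as well as the fitted line (built from slope and intercept above) are assumptions:

# Split the dates into working and non-working days (non_work_days from above)
work_days = [d for d in df_b.index if d not in non_work_days]
non_working = [d for d in df_b.index if d in non_work_days]

data = [
    # Working days
    go.Scatter(x=df_w[city][work_days].values, y=df_b[city][work_days].values,
               mode='markers', name='Working days'),
    # Non-working days
    go.Scatter(x=df_w[city][non_working].values, y=df_b[city][non_working].values,
               mode='markers', name='Non-working days'),
    # Fitted line: y = intercept + slope * x
    go.Scatter(x=df_w[city].values, y=intercept + slope * df_w[city].values,
               mode='lines', name='Fitted line')
]
py.iplot(data)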

When we repeat this exercise for Chicago and Washington DC, we see a similar picture: the fitted line matches the points well (Chicago, Washington DC) or less well (NYC), and the majority of the green points lie underneath the fitted line.

Step 4 - Video

Step 4 - Conclusion

In all three cities the vast majority of the green points lie under the fitted line. On non-working days there is generally less usage than our fitted linear regression model (Temperature ~ Bike Count) predicts.

For the forecasting (part 3 of this series) to be effective, we will need to take weekdays and public holidays into account.

Next step: Timeseries forecasting

The last part of this series is about timeseries forecasting with TM1py. We are going to use Prophet to forecast the bike shares in the future:

Upload weather data from web services into TM1/Planning Analytics

Python is a widely-used general-purpose programming language that lets you work quickly and integrate systems more effectively. TM1py enables you, for the first time, to easily get data from a multitude of systems and web services such as Trello, Salesforce, financial web services and many more... into your TM1/Planning Analytics application.

This help article is the first one in the series of articles focused on a fictional Bike Sharing company called Cubike, operating in New York City, Washington D.C. and Chicago. The data for the series is real historic data provided by Bike Sharing companies in North America: Divvy, Capital Bikeshare and Citi Bike.

Objective

The objective of these articles is to explore and understand the impact of the weather and public holidays on our operative (bike sharing) business, and subsequently use modern timeseries forecasting techniques to generate a rough two-year forecast. In this first article we will set up our working environment and upload weather data from a web service:

1. Execute a Python script that builds all the required TM1 objects for the series
2. Write a Python script to load daily historic weather data (Min, Max and Average by city) from a web service into a TM1 cube

If you are new to TM1py, we recommend you start with the following article, which is going to guide you through the installation of Python, TM1py and Jupyter:

Prerequisites

In order to run this sample you will need to install on your machine:

• IBM TM1 10.2 FP5 as a minimum, or Planning Analytics.
• Python, an extremely useful programming language for everything that is related to data.
• Pandas, a library which provides data structures and data analysis tools.
• TM1py, the Python package for TM1.
• Jupyter (How to set up Jupyter Notebook).

1 - Download the Cubike sample content

We gathered all the files you need to run this example in the following zip file:

After downloading the zip file from above, unzip it in any folder. You should see two Jupyter notebooks (.ipynb), four csv files and one Python script:

• chicago.csv: Bike Sharing data from Chicago.
• Data Integration with TM1py.ipynb: Jupyter Notebook to load data into TM1 from a web service.
• Data Science with TM1py.ipynb: Jupyter Notebook to analyse the data.
• nyc.csv: Bike Sharing data from New York City.
• public holidays.csv: Public Holidays data.
• washington_dc.csv: Bike Sharing data from Washington DC.
• setup.py: Python script to create TM1 objects.
         

2 - Run the setup.py script

The setup.py script will create all Cubes, Rules, Dimensions, Subsets, Views, etc. that are required. Open setup.py and then update the TM1 settings which you will find at the top of the script:

• address: Server name where TM1 is installed
• port: HttpPortNumber value in tm1s.cfg
• user: TM1 admin user name
• password: TM1 password
• ssl: UseSSL setting in tm1s.cfg

IMPORTANT: Before running the script, you should be aware that the Date, Version and City dimensions will be overwritten if they already exist in the TM1 model!

Now you are ready to run the script, either with PyCharm or from the command line. Open a Windows command prompt, navigate using the cd command to the setup.py folder location and then type:

• python setup.py

If the script fails, it is likely due to one of the following reasons:

After the script ran, you should now be able to see the following TM1 objects:

You are now ready to run the first Jupyter notebook!

           

3 - Run the Data Integration with TM1py Jupyter Notebook

For demonstration and documentation purposes, all the Python code in this series is distributed through Jupyter Notebooks. Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.

To start Jupyter Notebook, type the following command line in the folder where the notebooks are located:

• jupyter notebook

Once it has started, you should be able to access the Jupyter notebook via the following URL:

• http://localhost:8888/tree

You can then click the upload button in the top right corner and select the first notebook that we are going to use: Data Integration with TM1py.ipynb. Now click on the newly uploaded notebook and it should open in a new tab:


Step 1: Import packages and define constants

The first step in the Jupyter notebook is to import the packages and define the constants we need:

Step 2: Build the URL Query

We are pulling the weather data from the National Oceanic and Atmospheric Administration (NOAA). NOAA has a rich API which allows us to access all kinds of environmental data from the US.

We use the following code to build the URL query:

url = 'https://www.ncdc.noaa.gov/cdo-web/api/v2/data?' \
      'datasetid=GHCND&' \
      'startdate=' + FROM + '&' \
      'enddate=' + TO + '&' \
      'limit=1000&' \
      'datatypeid=TMIN&' \
      'datatypeid=TAVG&' \
      'datatypeid=TMAX&' \
      'stationid=' + STATION

print(url)

Step 2 - Video

Step 3: Query the Weather Data

Now that our URL is ready, we need to send the request to the API:

response = requests.get(url, headers=HEADERS).json()
results = response["results"]

From the response, we store the results in a new variable results. Then we print the first 3 objects:

print(json.dumps(results[0:3], indent=2))

Step 3 - Video


Step 4: Rearrange the data

Before sending the data into TM1, we now need to rearrange it so it matches our TM1 cube structure:

• Version = Actual
• Date = record['date'][0:10]
• City = NYC
• DataType = record['datatype']

cells = dict()

for record in results:
    value = record['value'] / 10
    coordinates = ("Actual", record['date'][0:10], "NYC", record['datatype'])
    cells[coordinates] = value

Step 4 - Video

Step 5: Send data into TM1

Now that the data is ready, we just need to connect to our TM1 instance and finally write the values into the TM1 cube Weather Data. With TM1py we can do this in two lines of code:

with TM1Service(address='10.77.19.60', port="12354", user="admin", password="apple", ssl=True) as tm1:
    tm1.cubes.cells.write_values("Weather Data", cells)

Step 5 - Video

If you open the Weather Data cube, you should be able to see some data for the first days of 2017:

          FAQ

+ What is TM1py?

TM1py is a Python package that wraps the TM1 REST API in a simple-to-use library. It makes it easy to build things with TM1 and Python.

+ What is Python?

          Python is a widely-used general-purpose programming language that lets you work quickly and integrate systems more effectively.

+ Why should you use TM1py?

          • Use Pandas for Data Analysis and Statistics on top of your TM1 model.
• Load data (e.g. FX, stock data) from web services such as Quandl into TM1.
          • Build Machine Learning and Forecasting Algorithms with Python and scikit-learn based on your TM1 data.

+ What are the prerequisites for TM1py?

The TM1 server must be on version 10.2.2 FP5 or higher.

+ Do I need a license to use TM1py?

TM1py is free and open source (MIT license) and there are no limitations of any kind. All of the code, documents and information in TM1py are to be deemed without warranty.

+ What TM1 security does TM1py support?

TM1py supports TM1 security modes 1, 2, 4 and 5. TM1py supports CAM authentication, which allows any LDAP directory to be used, including Active Directory.
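As a minimal sketch, connecting through CAM security with TM1py only requires passing the namespace in addition to the usual parameters (all connection values below are illustrative):

from TM1py.Services import TM1Service

# namespace is the CAM namespace configured for the TM1 instance
tm1 = TM1Service(address='localhost', port=8009, user='admin', password='apple', namespace='LDAP', ssl=True)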

          + How can I contribute?

The source code is hosted at github.com/cubewise-code/tm1py. If you find a bug or feel like you can contribute, please fork the repository, update the code and then create a pull request so we can merge in the changes.

+ How do I get started?

Just follow this Quick startup guide and you will be able to run your first Python script in less than 5 minutes.

Set up Jupyter Notebook

Jupyter is an open-source web application that enables you to mix code execution and narrative text. It is great for telling a story around your analysis.

The main difference with PyCharm is that Jupyter allows you to execute your Python scripts section by section.

          Why you should use it:

• Jupyter Notebook is a web app that allows us to write code in a more narrative, interactive way.

• You can share notebooks via email or host them on a shared platform.

• Jupyter Notebooks are a great new way to tell a story with your TM1 data.

          Download Jupyter
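If Jupyter is not installed yet, you can install it with pip:

• pip install jupyter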

Run Jupyter Notebook

          Run in a command line:

          • jupyter notebook

Jupyter runs by default on port 8888. To access the notebook, go to the following URL:

          • http://localhost:8888
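If port 8888 is already in use, you can start Jupyter on a different port using the --port option:

• jupyter notebook --port=8889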

           

          Check Connectivity with TM1

In order to check if TM1py can connect to your TM1 instance, you can use the check.py Python script from the samples. First download the samples from Github, then navigate to the TM1py-samples folder; in this example the samples are located in the following folder:

          • C:\TM1py\TM1py-samples-master

Hold shift and right-click in the folder, then select "Open command window here". This should open the command line (or PowerShell, depending on your Windows version). Now type "python check.py" into the console to execute the check.py script:

          • python check.py

          The script will ask you for:

          • TM1 User (Name of the TM1 User)

          • Password (The user's password. The command-line reads your input silently)

• CAM Namespace (only required if the TM1 instance uses CAM authentication)

          • Port (The HTTPPortNumber as specified in the TM1s.cfg)

          • address (Address of the TM1 instance. 'localhost' or '' if you run the TM1 instance locally)

          • ssl (True or False, as stated in the TM1s.cfg)

          TM1py will then try to connect to your TM1 instance.

It will print out the name of the TM1 instance. If this works without errors, you should be able to run any of the samples.

If this does not work, you should check:

1. That the TM1 REST API is enabled (see the tm1s.cfg sketch below)

2. Your credentials; the CAM Namespace in particular is case sensitive
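The REST API is configured in the tm1s.cfg file of the TM1 instance. As a minimal sketch, the relevant parameters look like this (the values are illustrative and must match what you enter in check.py):

HTTPPortNumber=8009
UseSSL=T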

All the samples are based on the Planning Sample TM1 model, which comes with the installation of TM1. The samples potentially contain hard-coded references to TM1 objects (e.g. cube names). Make sure to adjust those references if you are not testing against the Planning Sample!

          Set up your development environment

To use the TM1py samples, you do not need a development environment; you can just edit the scripts in a text editor or in Jupyter and then run them. But if you want to dig deeper into the Python language, having a development environment will make your life easier.

          Why do you need a development environment?

• Explore the TM1py code.
• See the TM1py scripts in multiple windows.
• Edit and run your scripts in the same interface.
• Get code completion and syntax highlighting.

          PyCharm

PyCharm is likely the best IDE for Python. It offers intelligent code completion, on-the-fly error checking and heaps of other features, allowing you to save time and be more productive. JetBrains offers a free Community Edition of PyCharm:

          https://www.jetbrains.com/pycharm/

          Once downloaded and installed with the default settings, you just need to locate the folder where you have downloaded the TM1py samples:


          You are now good to go!

The following video will give you a quick tour of PyCharm's main features:

          Run TM1py script from TM1 process

This article explains the steps to run a TM1py script from a TM1 process.

          1. Locate python.exe

First, you need to locate the python.exe file. The first locations to look at are:

          • C:\Python36
          • C:\Users\(Your logged in User)\AppData\Local\Programs\Python\Python36

If you can't find the python.exe location, you can just type the following in a command line:

          • where python

          In this example python.exe is located in C:\Python36\python.exe.

          2. Locate the Python script

The script we want to run is:

          • C:\TM1py\TM1py-samples-master\Load Data\fx rates to cube.py

          3. Create TM1 process

To run a Python script from a TM1 process, we are going to use the ExecuteCommand function in the TM1 process:

          • ExecuteCommand(CommandLine, Wait);

          The ExecuteCommand function has two parameters:

• CommandLine is the path of python.exe followed by the path of the script.
• Wait: set it to 1 if you want the TM1 process to wait for the script to finish, or to 0 if you want the process to go straight to the next line.

          Create a new TM1 process and add the following code:

          pyExe = '"C:\Python36\python.exe"' ;
          
          pyFile = '"C:\TM1py\TM1py-samples-master\Load Data\fx rates to cube daily.py"' ;
          
          EXECUTECOMMAND(pyExe |' '| pyFile,1);

Because there are spaces in the script path, we need to add double quotes "" inside the single quotes.

          Save the process and run it.

If there is an error in the command line, you won't see it from the TM1 process: the TM1 process can run successfully even though the command line failed. You can check your command line by running it in a command window:

          • "C:\Python36\python.exe" "C:\TM1py\TM1py-samples-master\Load Data\fx rates to cube daily.py"

          If you are planning to run multiple scripts, it is a good practice to store the file locations in a TM1 cube, so you can easily update the file locations if you need to.

          You can access the code of the script used in this example, fx rates to cube daily.py on Github.

          Create TM1 objects for TM1py samples

All the TM1py scripts in the Load Data folder require some TM1 objects. These scripts load data from different data sources into TM1 cubes. If you want to test these scripts without changing the target cubes, you will need to create these TM1 objects.
To create these objects, you can run the script sample setup.py, which is in the Load Data folder:

          1. Update TM1 instance settings

Before running the script, open the sample setup.py script and update the TM1 instance connection settings:
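The connection settings are not reproduced here; as a minimal sketch, they typically look like this (all values are illustrative and must match your own instance):

from TM1py.Services import TM1Service

# adjust address, port, user, password and ssl to your TM1 instance
tm1 = TM1Service(address='localhost', port=8009, user='admin', password='apple', ssl=True)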

          2. Run the script

          To run the script, just type the following command line:

          • python "sample setup.py"

If the command line shows errors, it could be either because you are not running the script in the right folder or because TM1py can't connect to the TM1 instance. Check that the connection with TM1 works.

          3. Check the new TM1 objects

If the script ran successfully, you will be able to see the following three cubes in your TM1 instance: