Run processes in parallel using only one TM1 connection

Being able to run processes in parallel is one of the most powerful features of IBM TM1 and Planning Analytics. Instead of loading one year of data with a single process, you can run one process per month in parallel, which significantly speeds up your loading time!

Run processes in parallel with TM1RunTI

Running processes in parallel is quite common now in IBM TM1 and Planning Analytics applications. The most common way to do it is to use TM1RunTI.

TM1RunTI is an executable located in the TM1 bin installation folder. With simple command lines like the ones below, you can run multiple processes in parallel:

"C:\Program Files\Cognos\TM1\bin\tm1runti.exe" -process Cub.Flight -adminhost CW111 -server flightstats -user TI01 -pwd "" pYear=2000 pMonth=01
"C:\Program Files\Cognos\TM1\bin\tm1runti.exe" -process Cub.Flight -adminhost CW111 -server flightstats -user TI02 -pwd "" pYear=2000 pMonth=02
"C:\Program Files\Cognos\TM1\bin\tm1runti.exe" -process Cub.Flight -adminhost CW111 -server flightstats -user TI03 -pwd "" pYear=2000 pMonth=03

Manage running processes with Hustle

Hustle was created to manage the threads and make sure that you do not exceed the number of available threads on the server. Hustle uses TM1RunTI, so it has the same limitation: for each thread, it first needs to connect to TM1 before it can execute the process.

In certain applications, TM1 developers had to set up one different user per thread to avoid locking.

Run processes in parallel with only one connection

To avoid this situation, RushTI was created:

RushTI does the same as Hustle, except that it connects to TM1 only once at the beginning and then reuses the same connection to run all threads.
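Conceptually, the execution model looks like a fixed-size worker pool draining a task list over one shared connection. Here is a minimal sketch of that pattern in Python (illustrative only, not RushTI's actual code; run_process is a placeholder for a TM1py process call):

```python
from concurrent.futures import ThreadPoolExecutor

def run_process(task):
    # Placeholder: with TM1py this call would execute the named TI process
    # over the single shared TM1 connection.
    return "executed {process} with pMonth={pMonth}".format(**task)

# Twelve tasks, one per month
tasks = [{"process": "Cub.Flight", "pMonth": str(m).zfill(2)} for m in range(1, 13)]

# Four workers drain the task list, four tasks at a time
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_process, tasks))

print(results[0])  # executed Cub.Flight with pMonth=01
```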

This article explains how to install and use RushTI. The prerequisites are:


  • TM1 10.2 FP5 at minimum

  • TM1py v1.0.3 at minimum

1. Install Python

RushTI is a Python script; to run it, you first need to install Python. Steps to install Python can be found here.

2. Install TM1py

RushTI uses the free Python package TM1py, so the next step is to install the TM1py package. Open a command prompt and then type:

pip install TM1py

3. Download RushTI folder

RushTI is an open-source Python script which you can download from its GitHub page:

In this example, the RushTI folder has been copied to the C:\RushTI folder:

4. Update config file

The example config file contains two instances, but you can add as many as you want. To add a TM1 instance, just add the following information:

  • [instanceName]

  • address: IP address of the physical server where the TM1 instance is located

  • port: HttpPortNumber of the instance (tm1s.cfg)

  • namespace: (only required for CAM Security)

  • user: TM1 user

  • password: TM1 user password (encoded in Base 64)

  • ssl: True/False (depending on the UseSSL setting in tm1s.cfg)
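Putting these settings together, an entry in the config file might look like this (all values are hypothetical; the password is the Base64 encoding of 'apple'):

```ini
[tm1srv01]
address=localhost
port=8001
namespace=
user=admin
password=YXBwbGU=
ssl=True
```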

WARNING: The password needs to be encoded in Base 64.
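You can produce the Base64-encoded value with Python itself; a quick sketch, using the hypothetical password 'apple':

```python
import base64

# Encode the plain-text password to Base64 for config.ini
password = "apple"
encoded = base64.b64encode(password.encode("utf-8")).decode("utf-8")
print(encoded)  # YXBwbGU=
```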

5. Update tasks.txt

The file tasks.txt contains the list of all processes which are going to be executed by RushTI. In this example, the process Cub.GeneralLedger.LoadFromFile will be executed for every month:

instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=01
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=02
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=03
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=04
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=05
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=06
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=07
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=08
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=09
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=10
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=11
instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=12
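Each task line is just a set of key=value pairs. As an illustration (this is not RushTI's actual parser), such a line can be parsed into a dictionary like this:

```python
import shlex

def parse_task_line(line):
    """Split one tasks.txt line into its key=value pairs."""
    pairs = (token.split("=", 1) for token in shlex.split(line))
    return dict(pairs)

task = parse_task_line('instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=01')
print(task["instance"], task["pMonth"])  # tm1srv01 01
```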

Note that in the same list you can run processes from different TM1 instances by changing the instance parameter:

instance="tm1srv01" process="Cub.GeneralLedger.LoadFromFile" pMonth=01
instance="tm1srv02" process="Cub.GeneralLedger.LoadFromFile" pMonth=01

6. Run RushTI from command line

To run RushTI from the command line, open a command prompt and run the following; the last parameter is the number of threads RushTI will run at the same time:

python RushTI.py Tasks.txt 4

If you are not in the RushTI folder, you will have to add the path to the script:

python C:\RushTI\RushTI.py C:\RushTI\Tasks.txt 4

The following video shows the Pulse Live Monitor with 4 threads running at the same time:

Run RushTI from a TM1 process

To run RushTI from a TM1 process, you just need to use the ExecuteCommand function in either the Prolog or Epilog tab:

sCommand = 'python C:\RushTI\RushTI.py C:\RushTI\Tasks.txt 4';
ExecuteCommand(sCommand, 0);

The second parameter of ExecuteCommand is 1 if you want the process to wait for the command to finish, or 0 to continue straight away without waiting.

Troubleshoot RushTI

If the threads are not running, the best place to start looking is the log file. RushTI creates a RushTI.log file in the same folder as the script.

The most common errors are the following:

  1. RushTI can't connect to the TM1 instance, so you should first check the settings in config.ini (Check connectivity with TM1).

  2. TM1py version should be at least v1.0.3 (Upgrade TM1py).

  3. Check parameters of the TM1 processes.

Setup Cubike example

Cubike is a fictional bike-sharing company that we use in the series of articles about Data Science with TM1 and Planning Analytics:

If you are new to TM1py, this article will guide you through setting up TM1py and all the different modules required for the Cubike example.


The only prerequisite is to have at least IBM TM1 10.2.2 FP5 already installed. For this example we are going to work with the TM1 instance called tm1srv01:


1. Install Python

TM1py is a Python package. To install TM1py, we first need to install Python. The steps to install Python can be found here:

Once Python has been installed, check that the installation was successful: open a command line as administrator and then type the following command to get the Python version:

python --version

In this example the Python version is v3.6.2:


2. Install TM1py

First you need to navigate to the Scripts folder where Python has been installed. To find out the Python location, type the following in a Windows command prompt:

where Python

It should return the location:

To go to the Python location just type cd and then the folder location:

cd "C:\Python36\Scripts"

To install TM1py you just need the following command line:

pip install TM1py

To check if the installation was successful just type:

pip show TM1py

It will show you information about TM1py, including the version number; in this example, v1.0.2:


3. Enabling TM1 REST API

TM1py uses the TM1 REST API to connect to the TM1 instance. In this example, our tm1srv01 instance uses the port 17001 for the REST API. To find out which port your TM1 instance is using, open the tm1s.cfg file and check the HttpPortNumber value.

To enable the REST API, open a web browser and paste the following URL:

  • https://localhost:17001/api/v1/$metadata 

You may be prompted with this warning message; click Advanced and then Proceed to localhost (unsafe):

You should then see the metadata information of your TM1 instance:


4. Test connection with TM1

Once the TM1 REST API has been enabled, the next step is to check that you have the right TM1 settings to connect to your TM1 instance. Just follow the steps described in the following article:

After running the script from the TM1py samples, TM1py should be able to find your TM1 instance name:


5. Download Cubike example

All the files you need to go through the Cubike examples are gathered in one zip file which you can download here:

After unzipping this file, you should see two Jupyter notebooks, four csv files and one python script:

  • chicago.csv: Bike Sharing data from Chicago.
  • Data Integration with TM1py.ipynb: Jupyter Notebook to load data into TM1 from a web service.
  • Data Science with TM1py.ipynb: Jupyter Notebook to analyse the data.
  • nyc.csv: Bike Sharing data from New York City.
  • public holidays.csv: Public Holidays data.
  • washington_dc.csv: Bike Sharing data from Washington DC.
  • Python script to create TM1 objects.

6. Install Jupyter Notebook

For demonstration and documentation purposes, all the Python code in this series is distributed through Jupyter Notebooks. Jupyter Notebooks allow you to create and share documents that contain live code, equations, visualizations and narrative text. A quick introduction to Jupyter can be found here.

This example contains two Jupyter notebooks (.ipynb). To open these two notebooks, we have to install Jupyter first.

To install Jupyter, you just need to run the following command from the Python Scripts folder:

pip install jupyter

7. Open a Jupyter Notebook

To open Jupyter Notebook, you need to run Jupyter from the folder where you downloaded the Cubike example. To open a Windows command prompt there, navigate to that folder, type cmd in the Windows Explorer address bar and press Enter:

To run Jupyter from this location, just type:

jupyter notebook

A new tab in your web browser should open and you should be able to see the two Jupyter Notebook (.ipynb) files; just click on the first one to open it:

Once it is open, you can use the buttons at the top to navigate through the code blocks. The most important buttons are the Up/Down arrows and the Run button, which executes a code block and moves to the next one:


Other Python packages required for Part 2 and Part 3 articles:

To run Data Science with TM1py.ipynb, you will need to install the following packages in the Python folder:

1) Plotly is a graphing library:

pip install plotly

2) A C++ compiler is required by Facebook Prophet; you can download it from Build Tools for Visual Studio 2017 and run the .exe file as administrator.

A restart of the server will be required.

3) Msgpack (required by Facebook Prophet):

pip install msgpack-python

4) PyStan (required by Facebook Prophet)

pip install PyStan

5) Facebook Prophet is a forecasting tool.

pip install fbprophet

6) Matplotlib is a plotting library

python -m pip install -U matplotlib

Cubike Part 1

You are now ready to start the Part 1 article:


Timeseries Forecasting with Facebook Prophet and TM1/Planning Analytics

Welcome to the last part of the article series about Data Science with TM1/Planning Analytics and Python. In Part 1 we loaded weather data from the NOAA web service into our TM1 cubes. In Part 2, by analyzing the data with Pandas and Plotly, we learned that:

  • There are strong seasonal trends throughout the year
  • Public Holidays and weekends have a negative impact on the bike shares
  • Temperature and Bike shares are strongly correlated in every city. 
  • The intensity with which temperature affects the bike shares varies by city. Washington DC is the city that is least affected by the weather.

Before going any further, we recommend that you read these two articles:


In this article, we are going to explain how to use Facebook's Prophet to create a two year demand forecast for bike sharing, based on four years of actuals from our TM1 cube.

Before we start with the implementation, let's quickly discuss what Prophet is.


The idea behind the Prophet package is to decompose a time series of data into the following three components:

  • Trends: these are non-periodic and systematic trends in the data,
  • Seasonal effects: these are modelled as daily or yearly periodicities in the data (optionally also hourly), and
  • Holidays / one-off effects: one-off effects for days like: Black Friday, Christmas, etc.

Based on our historic data, Prophet fits a model, where each of these components contribute additively to the observed time series. In other words, the number of bike shares on a given day is the sum of the trend component, the seasonal component and the one-off effects.

Step 1: Data Wrangling

In Part 2 of this series, we already loaded the actuals from the Bike Sharing cube into Python, into a variable we called df_b.

Before we can use this data to fit our Prophet model, we must make sure we arrange the data in a correct format.

The dataframe that Prophet needs has two columns:

  • ds: dates
  • y: numeric values

We execute the following code to arrange our dataframe.

city = 'NYC'
df_nyc = df_b[city].reset_index()
df_nyc.rename(columns={'Date': 'ds', city: 'y'}, inplace=True)

We use the tail function on our dataframe (df_nyc) to display the last 5 rows of data:
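Since df_b is built from a live TM1 cube, here is a self-contained sketch of the same rearrangement using synthetic data (the dates and counts are made up):

```python
import pandas as pd

# Synthetic stand-in for df_b: daily bike counts indexed by date
city = 'NYC'
df_b = pd.DataFrame(
    {city: [13000, 14500, 9800]},
    index=pd.Index(['2017-11-28', '2017-11-29', '2017-11-30'], name='Date'))

# Same rearrangement as above: one 'ds' date column, one 'y' value column
df_nyc = df_b[city].reset_index()
df_nyc.rename(columns={'Date': 'ds', city: 'y'}, inplace=True)
print(list(df_nyc.columns))  # ['ds', 'y']
```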

Step 2: Fitting the model

Now that we have the data ready, and a high level understanding of the seasonal trends in our data, we are ready to fit our model!

First we need to instantiate Prophet. We are passing two arguments to the constructor of the Prophet model:

  • The public holidays that we want Prophet to take into account
    (they come from a TM1 cube through MDX. More details in the Jupyter notebook)
  • Whether or not Prophet should model intraday seasonality

m = Prophet(holidays=holidays, daily_seasonality=False)

Now we can fit our model by executing the fit method and passing the dataframe that we arranged in step 1:

m.fit(df_nyc)

This is where Prophet is actually doing all the hard work, the curve-fitting.
Under the hood Prophet uses Stan to run the statistical calculations as fast as possible.

Step 3: Use Facebook's Prophet to forecast the next 2 years

We can use the fitted Prophet model to predict values for the future.

First we need to specify how many days we would like to forecast forward.
This code block creates a dataframe that extends two years (730 days) beyond the observed dates:

future = m.make_future_dataframe(periods=365*2)

Then we use the predict function on our model. As the argument to that function, we pass the dataframe future.

forecast = m.predict(future)


    The forecast is ready. Let's look at what Prophet predicted !

    We select the following columns on the dataframe and print out the last 5 records:

    • ds (the date) 
    • yhat (the predicted value)
    • yhat_lower (the lower bound of the confidence interval)
    • yhat_upper (the upper bound of the confidence interval)

    The following code is going to print the last 5 records:

    forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()

    Step 4: Analysing the forecast

    We can interrogate the model a bit to understand what it is doing. The best way to do this is to see how the model fits existing data and what the forecast looks like. This is shown in the plot below.

    The black dots correspond to the historic number of bike shares each day (2014-2018). The dark blue line represents the estimated number of shares, projected with the fitted model.
    The light blue lines correspond to the 80% confidence interval for the model's predictions.

    Judging visually, the model has done a good job of picking up the yearly seasonality and the overall trend. The forecast for 2019 and 2020 looks plausible!

    To get an even further understanding of our fitted model, we can plot each of the model components. This is shown in the plot below.

    In the top panel we see the linear growth term. This term contains changepoints (either determined independently by Prophet or preset by the user) so that the rate of growth is allowed to vary over time. The second panel shows the effect that public holidays have on our bike shares. The final two panels show the estimated yearly and weekly trends of the model:


      Conclusion on this analysis:

      • An overall trend of growth from 2015, which slowed down slightly after 2016.
      • Public holidays lead to a fall in the usage of the bikes
      • A strong weekly seasonality: Our bikes are used mostly during the week – presumably for commuting.
      • A strong yearly seasonality with a peak in summer/autumn and a drop in winter.

      Step 5: Send the data back to TM1

      Before sending the data back to TM1, we need to rearrange the data so it matches the dimensions in our cube:

      • Version: Prophet Forecast
      • Date: date
      • City: city
      • Bike Shares Measures:
        • Count for yhat
        • Count Lower for yhat_lower
        • Count Upper for yhat_upper

      To rearrange the data for TM1 we execute the following code.

      cells = {}
      for index, row in forecast.iterrows():
          date = str(row['ds'])[0:10]
          cells['Prophet Forecast', date, city, 'Count'] = round(row['yhat'])
          cells['Prophet Forecast', date, city, 'Count Lower'] = round(row['yhat_lower'])       
          cells['Prophet Forecast', date, city, 'Count Upper'] = round(row['yhat_upper'])

      Once our data set is ready, we use the TM1py function tm1.cubes.cells.write_values to send the data to our cube Bike Shares:

      tm1.cubes.cells.write_values('Bike Shares', cells)

      Let's check in the cube view: years 2014 to 2017 are actual data and the forecast starts in 2018:

      See Prophet in action with Jupyter

      Explore your TM1/Planning Analytics data with Pandas and Plotly

      Welcome to the second part of the Data Science with TM1/Planning Analytics series. In the Part 1 article, we loaded weather data from a web service into our TM1 cubes. Now that we have all the data we need in TM1, we can start to analyse it. The Python community provides lots of tools which make data science easy and fun.

      Why use Python for Data Science?

      • Top-notch, free data analysis libraries.
      • Free (and good) libraries to access data from systems or web.
      • Lots of research in new disciplines like Machine Learning (TensorFlow) or Natural Language Processing (NLTK).
      • TM1 (as the Information Warehouse) is the ideal data source for Data Science

      Before going through this article, you should first read Part 1:


      The objective of this article is to explore the impact of seasonal effects, weather and public holidays on our operative business. To do that we are going to follow these steps:

      1. Load and visualize monthly bike shares by city
      2. Explore seasonal and regional trends
      3. Analyze relationship between average temperatures and bike shares by day
      4. Analyze the impact of Non-working days vs. Working days.

      We created a Jupyter notebook to guide you through these steps. The notebook that we are going to use in this article is Data Science with TM1py.ipynb; you can find it in the zip file which you can download in the Part 1 article.


      To run this example on your machine, you will have to make sure first that you have installed the following components:

      Step 1: Load and visualize 2017 monthly bike shares by city

      The first step is to define the TM1 connection settings which you find at the top of the notebook:

      ADDRESS = "localhost"
      PORT = 8009
      USER = "admin"
      PWD = ""
      SSL = True

      Before we start with the analysis, we first need to bring data from TM1 into our notebook.

      We start with data from the view 2017 Counts by Month of the Bike Shares cube.
      To load data from the cube view into python we execute the following code:

      cube_name = 'Bike Shares'
      view_name = '2017 Counts by Month'
      data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False)

      Our cellset, given back by TM1, is stored in the variable data. We use a TM1py function to convert the cellset into a pandas dataframe, which we call df:

      df = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)

      We now need to rearrange the dataframe. We need to reduce the dimensionality of our data from 4 (Time, Version, City, Measure) to 2 (Time, City). This should make our life easier down the road.

      df['Values'] = df["Values"].replace(np.NaN, 0)
      for city in ('NYC', 'Chicago', 'Washington'):
          df[city] = df.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1)
      df.drop(columns=["Values"], inplace=True)
      df = df.groupby("Date").sum()

      To show the rearranged dataframe, we can just type df into a jupyter cell and execute it.

      Let's plot a bar chart from our dataframe to visually explore the monthly distributions across the cities.

      Using the popular charting library Plotly, we can create an interactive bar plot with just a few lines of Python code:

      cities = ('NYC', 'Chicago', 'Washington')
      # define Data for plot
      data = [go.Bar(x=df.index, y=df[city].values, name=city) for city in cities]
      # define Layout. stack vs. group !
      layout = go.Layout(
          barmode='stack',
          title="Bike Shares 2017"
      )
      # plot (assuming plotly.offline is imported as py)
      fig = go.Figure(data=data, layout=layout)
      py.iplot(fig)

      As you can see below, the Plotly package creates an interactive bar chart; you can untick a city and zoom into a specific chart area:

      Step 1 - Video

      Step 1 - Conclusion

      As expected, the seasons have a massive impact on our bike sharing business. In the warmer months we have substantially more usage than in colder months.

      Also interesting is that the seasons seem to impact cities differently. While the ratio between Summer and Winter months in NYC and Washington DC is approximately 1/2, in Chicago it is close to 1/5!

      Let's dig deeper into the relationships between the cities and the Temperature!

      Step 2: Explore seasonal and regional trends

      In order to get a clearer picture, instead of just looking at monthly figures of 2017, we are going to bring in the daily data from 2014 to 2017.

      We execute the following code to create a new dataframe df_b based on the data coming from the view 2014 to 2017 Counts by Day:

      cube_name = 'Bike Shares'
      view_name = '2014 to 2017 Counts by Day'
      data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False)
      df_b = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)
      df_b['Values'] = df_b["Values"].replace(np.NaN, 0)
      # Rearrange content in DataFrame
      for city in ('NYC', 'Chicago', 'Washington'):
          df_b[city] = df_b.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1)
      df_b.drop(columns=["Values"], inplace=True)
      df_b = df_b.groupby("Date").sum()

      Let's print 5 sample records from the dataframe using df_b.sample(5):

      Pandas dataframes come with very handy and easy-to-use tools for data analysis. To calculate the correlation between the different columns (cities) in our dataframe, we can just call the corr function on our dataframe: df_b.corr():

      As one would expect, the bike shares by day across our three cities are all strongly correlated.

      To analyze the relationship between the average temperature and the bike shares by city, we need to query the daily historic average temperatures from our TM1 cube into Python.

      We execute the following code to create a dataframe (we call it df_w) based on the cube view 2014 to 2017 Average by Day of the Weather Data cube:

      cube_name = 'Weather Data'
      view_name = '2014 to 2017 Average by Day'
      data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False)
      df_w = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False)

      To calculate the correlation between the two dataframes (df_w and df_b) we can use the corrwith function:
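Assuming the two dataframes share the same date index and city columns, the call is a one-liner. A self-contained sketch with synthetic data:

```python
import pandas as pd

# Synthetic stand-ins: average temperature (df_w) and bike counts (df_b) by day
idx = pd.date_range('2017-01-01', periods=5, freq='D')
df_w = pd.DataFrame({'NYC': [2, 5, 9, 14, 20]}, index=idx)
df_b = pd.DataFrame({'NYC': [400, 900, 1600, 2600, 3900]}, index=idx)

# Column-wise correlation between the two dataframes
correlations = df_b.corrwith(df_w)
print(correlations['NYC'])  # strongly positive, close to 1
```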


      Step 2 - Video

      Step 2 - Conclusion

      Temperature and Bike shares are strongly correlated in every city. 
      For the forecasting (part 3 of this series) to be effective we will need a model that can take seasonal effects (e.g. temperature) into account.

      The intensity with which temperature affects the bike shares varies by city.
      For the forecasting we will need to create different models by city.

      Step 3: Analyze relationship between average temperature and bike shares by day

      Let's visualize the relationship between temperature and bike shares in a Scatterplot.
      From our two dataframes: df_w (average temperature by day) and df_b (bike shares per day) we can create a scatterplot in just a few lines of code:

      cities = ('NYC', 'Chicago', 'Washington')
      colors = ('rgba(222, 167, 14, 0.5)', 'rgba(31, 156, 157, 0.5)', 'rgba(181, 77, 52, 0.5)')
      # Scatterplot per city
      data = [go.Scatter(
          x = df_w[city].values,
          y = df_b[city].values,
          mode = 'markers',
          marker = dict(
              color = color
          ),
          text = df_w.index
      ) for (city, color) in zip(cities, colors)]
      # Plot and embed in jupyter notebook! (assuming plotly.offline is imported as py)
      py.iplot(data)

      Step 3 - video

      Step 3 - Conclusion

      Analyzing the plot visually, we can make a few observations:

      • Among the three cities, the distribution in Chicago is the closest to a linear model.
        Judging visually, we could draw a neat line through that point cloud.
      • For Washington DC we can recognize an interesting trend: for temperatures of approx. 25 degrees and higher, the bike count stagnates.
      • The distribution for NYC is less homogeneous. A simple linear model would not sufficiently explain the bike shares count.

      Let's quantify those findings and take a closer look at how non-working days impact our operative business.

      Step 4: Analyze the impact of Non-working days vs. Working days.

      To analyze the impact of Public holidays and weekends, we will focus on one city at a time. Let's start with NYC.

      First we want to create a linear regression between the average temperatures and the bike shares for NYC. 

      To calculate the fitted line we use the linregress function from the popular Scipy stats module.

      Note that the function not only returns the slope and intercept of the fitted line, but also three measures (R squared, P value and the standard error) that quantify how well the fitted line matches the observations.

      city = 'NYC'
      x, y = df_w[city].values, df_b[city].values
      slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
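As a self-contained illustration, here is linregress applied to synthetic, perfectly linear data (100 extra bike shares per degree on top of a base of 500), so the fitted values are known in advance:

```python
from scipy import stats

# Synthetic data: exactly 100 extra bike shares per degree, base of 500
temps = [0, 5, 10, 15, 20, 25]
counts = [500, 1000, 1500, 2000, 2500, 3000]

slope, intercept, r_value, p_value, std_err = stats.linregress(temps, counts)
print(slope, intercept)  # slope ≈ 100, intercept ≈ 500, r ≈ 1
```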

      Now we need to query the Public Holidays and Weekends from TM1 through two small MDX queries and merge them into one list, which we call non_work_days.

      mdx = "{ FILTER ( { TM1SubsetAll([Date]) }, [Public Holidays].([City].[NYC]) = 1) }"
      public_holidays = tm1.dimensions.execute_mdx("Date", mdx)
      mdx = "{FILTER( {TM1SUBSETALL( [Date] )}, [Date].[Weekday] > '5')}"
      weekends = tm1.dimensions.execute_mdx("Date", mdx)
      non_work_days = public_holidays + weekends
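With non_work_days in hand, splitting the observations for the two-colored scatterplot is a simple membership test on the index. A sketch with synthetic dates (2017-01-01 as a holiday, 2017-01-07 as a Saturday):

```python
import pandas as pd

# Synthetic daily counts; 2017-01-01 (holiday) and 2017-01-07 (Saturday) are non-working
df = pd.DataFrame(
    {'count': [900, 2100, 1100]},
    index=pd.to_datetime(['2017-01-01', '2017-01-02', '2017-01-07']))
non_work_days = pd.to_datetime(['2017-01-01', '2017-01-07'])

# Membership test on the index splits the observations
mask = df.index.isin(non_work_days)
work, non_work = df[~mask], df[mask]
print(len(work), len(non_work))  # 1 2
```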

      Now we can create a new scatterplot, that includes the fitted line (orange), the working days (blue) and the non-working days (green).

      When we repeat this exercise for Chicago and Washington DC we see a similar picture:
      the fitted line matches the points more closely (Chicago, Washington DC) or less closely (NYC), and the majority of the green points lie underneath the fitted line.

      Step 4 - video

      Step 4 - Conclusion

      In all three cities the vast majority of the green points lie under the fitted line.
      On non-working days there is generally less usage than our fitted linear regression model (Temperature ~ Bike Count) predicts.

      For the forecasting (part 3 of this series) to be effective we will need to take weekdays and public holidays into account.

      Next step: Timeseries forecasting

      The last part of this series is about timeseries forecasting with TM1py. We are going to use Prophet to forecast the bike shares in the future:

      Upload weather data from web services into TM1/Planning Analytics

      Python is a widely-used general-purpose programming language that lets you work quickly and integrate systems more effectively. TM1py enables you, for the first time, to easily get data from a multitude of systems and web services such as Trello, Salesforce, financial web services and many more into your TM1/Planning Analytics application.

        This help article is the first in the series of articles focused on a fictional bike-sharing company called Cubike, operating in New York City, Washington D.C. and Chicago.
        The data for the series is real historic data provided by bike-sharing companies in North America: Divvy, Capital Bikeshare and Citi Bike.


        The objective of these articles is to explore and understand the impact of the weather and public holidays on our operative (bike sharing) business and subsequently use modern timeseries forecasting techniques to generate a rough two year forecast.
        In this first article we will set up our working environment and upload weather data from a web service:

        1. Execute a python script that builds all the required TM1 objects for the series
        2. Write a python script to load daily historic weather data (Min, Max and Average by city) from a web service into a TM1 cube

        If you are new to TM1py, we recommend that you start with the following article, which will guide you through the installation of Python, TM1py and Jupyter:


        In order to run this sample you will need to install on your machine:

        • IBM TM1 10.2 FP5 as minimum or Planning Analytics.
        • Python, an extremely useful programming language for everything that is related to data.
        • Pandas, a library which provides data structures and data analysis tools.
        • TM1py, the python package for TM1.
        • Jupyter (How to setup jupyter notebook).

        1 - Download the Cubike sample content

        We gathered all the files you need to run this example in the following zip file: 

        After downloading the zip file above, unzip it into any folder; you should see two Jupyter notebooks (.ipynb), four csv files and one python script:

        • chicago.csv: Bike Sharing data from Chicago.
        • Data Integration with TM1py.ipynb: Jupyter Notebook to load data into TM1 from a web service.
        • Data Science with TM1py.ipynb: Jupyter Notebook to analyse the data.
        • nyc.csv: Bike Sharing data from New York City.
        • public holidays.csv: Public Holidays data.
        • washington_dc.csv: Bike Sharing data from Washington DC.
        • Python script to create TM1 objects.

        2 - Run the script

        The script will create all Cubes, Rules, Dimensions, Subsets, Views, etc. that are required. Open and then update the TM1 settings which you will find at the top of the script:

        • address: Server name where TM1 is installed
        • port: HttpPortNumber value in tm1s.cfg
        • user: TM1 admin user name
        • password: TM1 password
        • ssl: UseSSL setting in tm1s.cfg

        IMPORTANT: Before running the script, be aware that the Date, Version and City dimensions will be overwritten if they already exist in the TM1 model!

        Now you are ready to run the script, either with PyCharm or from the command line. Open a Windows command prompt, navigate to the folder location using the cd command and then type:

        • python

        If the script fails, it is likely due to one of the following reasons:

        After the script has run, you should be able to see the following TM1 objects:

          You are now ready to run the first Jupyter notebook!


          3 - Run the Data Integration with TM1py Jupyter Notebook

          For demonstration and documentation purposes all the python code in this series is distributed through Jupyter Notebooks. Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.

          To start Jupyter Notebook, you need to type the following command line in the folder where the notebooks are located:

          • jupyter notebook

          Once it has started, you should be able to access Jupyter Notebook via the following URL:

          • http://localhost:8888/tree

          You can then click the upload button in the top right corner and select the first notebook that we are going to use: Data Integration with TM1py.ipynb.
          Now click on the newly uploaded notebook and it will open in a new tab:

          Step 1: Import packages and define constants

          The first step in the Jupyter notebook is to import the packages and define the constants we need:

          Step 2: Build the URL Query

          We are pulling the weather data from the National Oceanic and Atmospheric Administration (NOAA). NOAA has a rich API which allows us to access all kinds of environmental data from the US.

          We use the following code to build the URL query:

          url = '' \
                'datasetid=GHCND&' \
                'startdate=' + FROM + '&' \
                'enddate=' + TO + '&' \
                'limit=1000&' \
                'datatypeid=TMIN&' \
                'datatypeid=TAVG&' \
                'datatypeid=TMAX&' \
                'stationid=' + STATION
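As a side note, the same query string can also be assembled with the standard library's urlencode, which handles escaping and the repeated datatypeid parameters. The constant values below are illustrative stand-ins for the notebook's FROM, TO and STATION:

```python
from urllib.parse import urlencode

# Illustrative stand-ins for the constants defined at the top of the notebook
FROM, TO, STATION = "2017-01-01", "2017-02-28", "GHCND:USW00094728"

# A list of pairs preserves the repeated datatypeid parameters
params = [
    ("datasetid", "GHCND"),
    ("startdate", FROM),
    ("enddate", TO),
    ("limit", 1000),
    ("datatypeid", "TMIN"),
    ("datatypeid", "TAVG"),
    ("datatypeid", "TMAX"),
    ("stationid", STATION),
]
query = urlencode(params)  # "datasetid=GHCND&startdate=2017-01-01&..."
```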

          Step 2 - video

          Step 3: Query the Weather Data

          Now that our URL is ready, we need to send the request to the API:

          response = requests.get(url, headers=HEADERS).json()
          results = response["results"]   

          From the response, we store the results in a new variable, results. Then we print the first 3 records:

          print(json.dumps(results[0:3], indent=2))

          Step 3 - video

          Step 4: Rearrange the data

          Before sending the data into TM1, we now need to rearrange the data so it matches our TM1 cube structure:

          • Version = Actual
          • Date = record['date'][0:10]
          • City = NYC
          • DataType = record['datatype']

          cells = dict()
          for record in results:
              value = record['value'] / 10  # NOAA returns temperatures in tenths of degrees
              coordinates = ("Actual", record['date'][0:10], "NYC", record['datatype'])
              cells[coordinates] = value

          Step 4 - video

          Step 5: Send data into TM1

          Now that the data is ready, we just need to connect to our TM1 instance and finally write the values into the TM1 cube "Weather Data". With TM1py we can do this in two lines of code:

          with TM1Service(address='', port="12354", user="admin", password="apple", ssl=True) as tm1:
              tm1.cubes.cells.write_values("Weather Data", cells)

          Step 5 - video

          If you open the Weather Data Cube, you should be able to see some data for the first days of 2017:


          + What is TM1py ?

          TM1py is a Python package that wraps the TM1 REST API in a simple to use library. That makes it easy to build stuff with TM1 and python.

          + What is Python ?

          Python is a widely-used general-purpose programming language that lets you work quickly and integrate systems more effectively.

          + Why should you use TM1py ?

          • Use Pandas for Data Analysis and Statistics on top of your TM1 model.
          • Load data (e.g. FX, stock data) from web services into TM1 such as Quandl financial.
          • Build Machine Learning and Forecasting Algorithms with Python and scikit-learn based on your TM1 data.

          + TM1py prerequisites ?

          The TM1 server should be on version 10.2.2 FP5 at minimum.

          + Do I need a license to use TM1py ?

          TM1py is free and open source (MIT license) and there are no limitations of any kind. All of the code, documents and information in TM1py are provided without warranty.

          + What TM1 security does TM1py support ?

          TM1py supports TM1 security modes 1, 2, 4 and 5. TM1py supports CAM authentication, which allows any LDAP directory to be used, including Active Directory.
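In practice, a CAM login only differs by the extra namespace argument passed to TM1Service. A minimal sketch; every value below (the "LDAP" namespace, port and credentials) is a placeholder for your own environment, and the connection itself is commented out so the snippet stands alone:

```python
# Placeholder connection parameters for CAM security (modes 4 and 5)
cam_params = dict(
    address="localhost",
    port=12354,
    user="admin",
    password="apple",
    namespace="LDAP",  # the CAM namespace ID configured in Cognos
    ssl=True,
)

# With TM1py installed and a running instance, the connection would be:
# from TM1py import TM1Service
# with TM1Service(**cam_params) as tm1:
#     print(tm1.server.get_server_name())
```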

          + How can I contribute?

          The source code is hosted on GitHub. If you find a bug or feel you can contribute, please fork the repository, update the code and then create a pull request so we can merge in the changes.

          + How do I get started ?

          Just follow this Quick startup guide and you will be able to run your first Python script in less than 5 min.

          Set up jupyter notebook

          Jupyter is an open-source web application that enables you to mix code execution and comments. It is great for telling a story and running through the code section by section.

          The main difference from PyCharm is that Jupyter allows you to execute your Python scripts section by section.

          Why you should use it:

          • Jupyter Notebook is a web app that allows us to write code in a more narrative, interactive way.

          • You can share notebooks by email or host them on a shared platform.

          • Jupyter Notebooks are a great new way to tell a story with your TM1 data.

          Download Jupyter

          Run jupyter notebook

          Run in a command line:

          • jupyter notebook

          Jupyter runs by default on port 8888. To access the notebook, go to the following URL:

          • http://localhost:8888


          Check Connectivity with TM1

          In order to check whether TM1py can connect to your TM1 instance, you can use a Python script from the samples. First download the samples from GitHub and navigate to the TM1py-samples folder; in this example the samples are located in the following folder:

          • C:\TM1py\TM1py-samples-master

          Hold shift and right-click in the folder, then select "Open command window here". This opens the command line (or PowerShell, depending on your Windows version). Now type the following into the console to execute the script:

          • python

          The script will ask you for:

          • TM1 User (Name of the TM1 User)
          • Password (The user's password. The command-line reads your input silently)
          • CAM Namespace (the CAM namespace ID, if you use CAM security)
          • Port (The HTTPPortNumber as specified in the TM1s.cfg)
          • address (Address of the TM1 instance. 'localhost' or '' if you run the TM1 instance locally)
          • ssl (True or False, as stated in the TM1s.cfg)
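Note that every answer is read as a plain string, so the ssl value, for instance, has to be turned into a boolean before TM1py can use it. A minimal sketch of that conversion; the function name is illustrative, not part of the sample:

```python
def parse_ssl(answer: str) -> bool:
    """Convert a command-line answer such as 'True' or 'False' to a boolean."""
    return answer.strip().lower() in ("true", "yes", "y", "1")

# parse_ssl("True") -> True, parse_ssl("False") -> False
```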

          TM1py will then try to connect to your TM1 instance.

          It will print out the name of the TM1 instance. If this works without errors, you should be able to run any of the samples.
          All the samples are based on the Planning Sample TM1 model that comes with the installation of TM1. The samples potentially contain hard-coded references to TM1 objects (e.g. cube names). Make sure to adjust those references if you are not testing against the Planning Sample!

          Set up your development environment

          To use the TM1py samples, you do not need a development environment; you can just edit the scripts in a text editor or in Jupyter and then run them. But if you want to dig deeper into the Python language, a development environment will make your life easier.

          Why do you need a development environment?

          • Explore the TM1py code.
          • View the TM1py scripts in multiple windows.
          • Edit and run your scripts in the same interface.
          • Get code completion and syntax highlighting.


          PyCharm is likely the best IDE for Python. It offers intelligent code completion, on-the-fly error checking and heaps of other features. It allows you to save time and be more productive. JetBrains offers a free Community Edition of PyCharm:

          Once downloaded and installed with the default settings, you just need to locate the folder where you have downloaded the TM1py samples:


          You are now good to go!

          The following video will give you a quick tour of the main PyCharm features:

          Run TM1py script from TM1 process

          This article explains the steps to run a TM1py script from a TM1 process.

          1. Locate python.exe

          First you need to locate the python.exe file. The first places to look are:

          • C:\Python36
          • C:\Users\(Your logged in User)\AppData\Local\Programs\Python\Python36

          If you can't find your python.exe location, you can just type in a command line:

          • where python

          In this example python.exe is located in C:\Python36\python.exe.

          2. Locate the Python script

          The script we want to run is:

          • C:\TM1py\TM1py-samples-master\Load Data\fx rates to

          3. Create TM1 process

          To run a Python script from a TM1 process, we are going to use the ExecuteCommand function of the TM1 process:

          • ExecuteCommand(CommandLine, Wait);

          The ExecuteCommand function has two parameters:

          • CommandLine will be the path of the python.exe and the script.
          • Set Wait to 1 if you want the TM1 process to wait for the script to finish, or set Wait to 0 if you want the process to go straight to the next line.

          Create a new TM1 process and add the following code:

          pyExe = '"C:\Python36\python.exe"' ;
          pyFile = '"C:\TM1py\TM1py-samples-master\Load Data\fx rates to cube"' ;
          EXECUTECOMMAND(pyExe |' '| pyFile,1);

          Because there are spaces in the script name, we need to add double quotes "" inside the single quotes.

          Save the process and run it.

          If there is an error in the command line, you won't be able to see it from the TM1 process; even if the TM1 process runs successfully, the command line itself could have failed. You can test your command line in a command window:

          • "C:\Python36\python.exe" "C:\TM1py\TM1py-samples-master\Load Data\fx rates to cube"

          If you are planning to run multiple scripts, it is a good practice to store the file locations in a TM1 cube, so you can easily update the file locations if you need to.

          You can access the code of the script used in this example, fx rates to cube, on GitHub.

          Create TM1 objects for TM1py samples

          All the TM1py scripts in the Load Data folder require some TM1 objects. These scripts load data from different data sources into TM1 cubes. If you want to test these scripts without changing the target cubes, you will need to create these TM1 objects.
          To create these objects, you can run the sample script, which is in the Load Data folder:

          1. Update TM1 instance settings

          Before running the script, open it and update the TM1 instance connection settings:

          2. Run the script

          To run the script, just type the following command line:

          • python "sample"

          If the command line shows errors, it is likely either because you are not running the script from the right folder or because TM1py can't connect to the TM1 instance. Check that the connection with TM1 works.

          3. Check the new TM1 objects

          If the script ran successfully, you will then be able to see the following three cubes in your TM1 instance:

          Generate MDX Queries from a Cube View

          MDX is a fast and powerful way to query data from TM1 Cubes. It can play a key role when you combine your TM1 models with other technologies (e.g. Canvas, R, Python).

          Example of MDX Query:

          SELECT NON EMPTY TM1SubsetToSet([Account],"Gross Margin") on ROWS, 
            NON EMPTY TM1SubsetToSet([Period],"All Periods") on COLUMNS  
            FROM [General Ledger] 
          WHERE ([Version].[1],[Year].[2013],[General Ledger Measure].[Amount],[Currency].[Local],[Region].[1],[Department].[1])

          How do we write MDX Queries? TM1py can do this for you!

          TM1py can generate MDX Queries from existing cube views in your TM1 model. So, you don’t have to write them yourself.

          This can be a great way to get started with MDX or to save time and be more productive.
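A generated query is also directly usable from Python: TM1py can execute MDX against a running instance and return the cellset. In the sketch below the connection details are placeholders, and the call itself is commented out so the snippet stands alone:

```python
# The MDX query from this article, stored as a plain string
mdx = """
SELECT NON EMPTY TM1SubsetToSet([Account],"Gross Margin") on ROWS,
  NON EMPTY TM1SubsetToSet([Period],"All Periods") on COLUMNS
  FROM [General Ledger]
WHERE ([Version].[1],[Year].[2013],[General Ledger Measure].[Amount],[Currency].[Local],[Region].[1],[Department].[1])
"""

# With TM1py installed and a reachable instance:
# from TM1py import TM1Service
# with TM1Service(address="localhost", port=12354, user="admin",
#                 password="apple", ssl=True) as tm1:
#     cells = tm1.cubes.cells.execute_mdx(mdx)
#     for coordinates, cell in cells.items():
#         print(coordinates, cell["Value"])
```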

          This article explains the steps to create an MDX query from an existing cube view. For this example we will use the Gross Margin by periods view of the General Ledger cube:

          The TM1py script which generates the MDX is called 

          If you haven't downloaded the samples, you can download the script here.

          2. Update the script

          You need to update the TM1 connection parameters and then the cube and view name:

          3. Run the script

          Before running the script, you should make sure that TM1py is installed and that TM1py can connect to your TM1 instance.

          To run the script, open a command line, go to the script folder (in this example C:\TM1py\TM1py-samples-master\Other) and then type:

          • python "generate mdx from native"

          TM1py will then print the MDX query in the command line:

          SELECT NON EMPTY TM1SubsetToSet([Account],"Gross Margin") on ROWS, 
            NON EMPTY TM1SubsetToSet([Period],"All Periods") on COLUMNS  
            FROM [General Ledger] 
          WHERE ([Version].[1],[Year].[2013],[General Ledger Measure].[Amount],[Currency].[Local],[Region].[1],[Department].[1])

          4. Print the MDX query into a file

          To print the MDX query to a file, you can replace the print statement with the following code (where mdx is the variable holding the generated query):

          with open('output.csv','w') as file:
              print(mdx, file=file)

          The script should look like this:

          If you run the script again in the command line:


          A file containing the MDX query will be created in the same folder where the script is located:

          Find unused dimensions

          Ever wondered which of the dimensions in your TM1 Model are not used in cubes?

          TM1py can help to answer this question with 8 lines of code!

          If you have previously downloaded the samples folder, the script "find unused" is located in the Other folder:

          If you want to download only this script, you can find the code on GitHub.
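The core of the script is simple set arithmetic: take every dimension name and subtract those referenced by at least one cube. A sketch of that logic with the TM1Service calls replaced by plain data; all object names below are made up:

```python
def find_unused(all_dimensions, cubes):
    """Return, sorted, the dimensions that appear in no cube.

    all_dimensions: iterable of dimension names
    cubes: mapping of cube name -> list of its dimension names
    """
    used = {dim for dims in cubes.values() for dim in dims}
    return sorted(set(all_dimensions) - used)

# In the real script the inputs come from TM1py instead of literals,
# e.g. tm1.dimensions.get_all_names() for the dimension names.
unused = find_unused(
    ["Account", "Period", "Scratch Dim"],
    {"General Ledger": ["Account", "Period"]},
)
# unused -> ["Scratch Dim"]
```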

          1. Check connection with TM1

          Before running the script you should make sure that the connection with your TM1 instance is working:

          2. Update the script

          Open the python script with a text editor and update the TM1 connection information (address, port, user, password, ssl):

          3. Run the script

          Open a command line from where the script is located and then type:

          • python "find unused"

          By default it is going to print the list of all unused dimensions in the command line.

          4. Print the list of dimensions into a file

          To print this list of dimensions to a file, you can replace the print statement with the following code:



          with open('output.csv','w') as file:
              for dim in unused_dimensions:
                  print(dim, file=file)

          Your code should look like this:

          Save the script and if you run the same command line again:


          The script will create the file in the same location where the script is located:

          Open the file and you will see the list of all dimensions:

          Be careful before deleting a dimension

          This script will give you the list of all dimensions which are not used in any cube. But unused in cubes does not mean that a dimension is not used by anything else; for example, it could still be referenced in a TM1 process or a TM1 rule.

          To find all unused dimensions using a TM1 process, you can find the steps in this IBM guide:

          Clean up your TM1 application

          Sometimes our TM1 models (especially non-production environments) become cluttered with temporary and unwanted objects, such as:

          • subsets and views that were used in TI processes but not properly deleted
          • temporary processes that were used to test stuff
          • cubes and dimensions used for prototyping

          In many cases we use certain naming conventions when we create temporary objects, such as prefixing all the processes with 'temp_' or 'test'. It can be a tedious task, though, to walk through your model object by object and delete everything that matches a certain naming convention.

          Writing a TM1 process which loops through TM1 objects

          Writing a TI process, on the other hand, can be code-heavy and might rely on the data directory (which is no longer a sustainable approach, considering recent developments in the TM1 server such as encryption of data at rest).

          Neither option is convenient. TM1py can help!

          Through TM1py you can read, update, create and delete all kinds of TM1 objects.
          When we combine this with the power of regular expressions, we can write scripts that clean our models of all temporary or unwanted objects with the ease of Python.

          This article explains how to run the TM1py script cleanup. If you have downloaded the TM1py samples, you can find this script in the folder called Other:

          The objective is to delete all TM1 objects starting with TM1py in the Canvas Sample TM1 instance:

          1. Update TM1 settings and the regular expression

          Open the script with any text editor or with PyCharm and update the TM1 connection settings (address, port, user, password, ssl). Before running the script, you should make sure that TM1py can connect to the TM1 instance.

          To delete all TM1 objects starting with TM1py we are using the following regular expression:

          • regex_list = ['^TM1py*']
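Before deleting anything, it is worth checking what the pattern actually matches. Note that in '^TM1py*' the * applies only to the final y, so the pattern also matches names starting with just TM1p. A quick test with Python's re module, assuming the script matches names with re.search; the object names below are made up:

```python
import re

regex_list = ['^TM1py*']

# Made-up object names to test the pattern against
names = ["TM1py FX Rates", "Revenue", "TM1p_temp"]
matched = [name for name in names
           if any(re.search(pattern, name) for pattern in regex_list)]
# matched -> ["TM1py FX Rates", "TM1p_temp"]
```

If you only want names that really start with TM1py, drop the trailing * from the pattern.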

          Your script should look like this:

          2. Run the script

          In a command line, navigate to the folder where the script is located and then type:

          • python "cleanup"

          If you now check your TM1 instance, all TM1 objects starting with TM1py should have been deleted.

          Upload Exchange Rates from a Web Service

          Your TM1 models can benefit greatly from external information, e.g. FX rates, weather forecasts, commodity prices or consumer confidence data. Web services like Yahoo Finance, Bloomberg or FRED offer a range of financial and non-financial information, and many of them are free to use.

          How do we bring this data into our TM1 model? TM1py can help us to do this.

          What TM1py does:

          The load process generally consists of 3 steps:

          1. Load raw data from a webservice (e.g. Yahoo Finance, Quandl)
          2. Transform the data
          3. Push the transformed data into TM1
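The transform in step 2 typically means reshaping the raw time series into a dictionary of cube coordinates to values, ready for tm1.cubes.cells.write_values in step 3. A sketch with a synthetic pandas DataFrame standing in for the pandas_datareader result; the coordinate elements are illustrative:

```python
import pandas as pd

# Synthetic stand-in for the DataFrame pandas_datareader returns for DEXJPUS
raw = pd.DataFrame(
    {"DEXJPUS": [117.5, 115.1]},
    index=pd.to_datetime(["2017-01-03", "2017-01-04"]),
)

# Step 2: transform the series into coordinates matching the cube's dimensions
cells = {}
for date, row in raw.iterrows():
    coordinates = ("Actual", date.strftime("%Y-%m-%d"), "USD/JPY")
    cells[coordinates] = row["DEXJPUS"]

# Step 3 would then be a single call:
# tm1.cubes.cells.write_values("TM1py FX Rates", cells)
```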

          This article explains the steps to load FX rates from FRED into a TM1 cube.

          What is FRED?

          FRED (Federal Reserve Economic Data) is a database maintained by the Federal Reserve Bank of St. Louis which offers lots of financial data, including exchange rates. In this example we are interested in the exchange rate from USD to JPY:


          This sample uses pandas to get the data from FRED. Before running the script you need to install the pandas Python module; in a command line, type:

          • pip install pandas

          Once pandas has been installed, you also need the pandas_datareader module. To install it, run in a command line:

          • pip install pandas_datareader

          1. Update TM1 instance settings

          Before running the Load Data\fx rates to cube script you need to update the TM1 credentials:

          This script is going to load the exchange rates into the TM1py FX Rates cube; to create this cube, you need to run the sample script first.

          2. Run the script

          After checking the connectivity with your TM1 instance, open a command line inside the Load Data folder and type the following command:

          • python "fx rates to cube"

          3. Check TM1py FX Rates

          This script will load daily exchange rates from 1990/01/01 to 2041/01/01 in the TM1py FX Rates cube:

          4. Load other Exchange rates

          If you are interested in other exchange rates, you can see all available exchange rates at this link:

          For example, if you want to load the exchange rate from USD to EUR, you need to use the index DEXUSEU:

          To run this script from a TM1 process you should check the following help article:

          Check TM1py version

          Check TM1py version using the command line

          To check the TM1py version installed on your machine, just open a command window and then type:

          • pip show TM1py

          In the example above the TM1py version is 0.1.1

          Check TM1py version using PyCharm

          If you are using PyCharm, to check the TM1py version go to File > Settings, click on your project and you should see all the packages that the project is using. In the example below the TM1py version is 1.0.1:

          Install TM1py

          TM1py is a Python package that wraps the TM1 REST API in a simple to use library. That makes it easy to build stuff with TM1 and python.


          • TM1 10.2.2 FP5 as minimum

          It is very easy to set up TM1py; you just need to follow these three steps:

          1. Install Python
          2. Enable the TM1 REST API
          3. Install TM1py package

          All these steps are described on the TM1py-samples GitHub page: