Cubewise Arc: approved by TM1 developers around the world!

The TM1 Story

IBM Planning Analytics, powered by the TM1 engine, is famous for its speed, scalability and world-class modeling capabilities. Over many years, the TM1 engine has been dramatically improved; with the advent of Arc, TM1 now boasts a modern developer experience befitting its status.

The latest TM1 chapter: Cubewise Arc

Aware of a growing market demand for a modern TM1 developer tool, Cubewise gathered input from its global team of TM1 consultants on what the “ultimate” TM1 developer tool would look like. The result is Cubewise Arc, developed by and for professional TM1 developers.

Arc is now in use in 26 countries

The response to Arc from the global TM1 community has been nothing less than amazing! Only a few months after its official release, the Arc 1.1 trial has been downloaded in 26 countries, often followed by enthusiastic feedback from developers who have discovered its many benefits.

Over 40 TM1 developers have contributed…

… to the Arc project on GitHub.

One of the things that makes Arc special is that it is built by TM1 developers who have invited the entire TM1 community “behind the curtain” of product development.

Cubewise has made our support tickets public on GitHub, and anyone can go to the Cubewise CODE/arc-issues repository and post a bug, ask a question or request a new enhancement.

Among the 100+ TM1 developers already using Arc, 40+ have already contributed, resulting in more than 240 closed tickets:

Try Arc today!

If you are a professional TM1 developer who wants to build applications faster, easier and with higher quality, an Arc download is just a click away at arc-download.

Arc does not require elevated desktop privileges – simply run the arc.exe file and you will be up and running in no time.

Cubewise looks forward to you joining the growing community of enthusiastic Arc users, and we are eager to hear your feedback!

How to create a Planning Analytics Hierarchy with TurboIntegrator

IBM Planning Analytics (PA) introduced a new layer in the query engine called a Hierarchy.

If you are not familiar with the PA Hierarchies, please read this blog article first: Mastering Hierarchies in IBM TM1 and Planning Analytics

Now that you know what a Hierarchy is and why you would use one (e.g. to make your applications “future-proof”), this article provides a step-by-step guide to building a new hierarchy with a TurboIntegrator process. We will use Cubewise’s Arc IDE (Integrated Development Environment), but you can apply the same concepts with the TI editor in Architect and Planning Analytics Workspace.

New TI functions

To manipulate Hierarchies in TI, IBM Planning Analytics has introduced a set of new TurboIntegrator functions. IBM has ensured these new functions are very similar to the TI functions for dimensions, and there is usually a 1-to-1 corresponding function. For example, you would use HierarchyExists (check if a hierarchy exists) instead of DimensionExists (check if a dimension exists).
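
As a sketch of how these functions fit together in the Prolog of a process (the function names are from Planning Analytics; the 'Entity', 'Region', 'Europe' and 'France' names are hypothetical):

```
# Create a 'Region' hierarchy in the 'Entity' dimension if it is missing
IF ( HierarchyExists ( 'Entity', 'Region' ) = 0 );
    HierarchyCreate ( 'Entity', 'Region' );
ENDIF;

# Add a consolidation, a leaf element, and attach the leaf to the consolidation
HierarchyElementInsert ( 'Entity', 'Region', '', 'Europe', 'C' );
HierarchyElementInsert ( 'Entity', 'Region', '', 'France', 'N' );
HierarchyElementComponentAdd ( 'Entity', 'Region', 'Europe', 'France', 1 );
```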

If you are using Cubewise Arc, your learning curve for Hierarchies will be gentler, as all the new Hierarchy functions are available as code snippets in Arc’s TI editor:

See it in action!

This 5 min video will show you how to build a hierarchy from scratch with a TM1 process:

For more details about building a new Hierarchy using a process with Arc, have a look at the following step-by-step guide:

Try Arc now!

Arc comes with a three-month trial period and there is no installation required. Just download Arc and double-click the arc.exe file, and you will be coding in no time:

Arc v1.1 is now available to download

After months of development and feedback from customers and consultants all around the world, Arc v1.1 is now available to download!

The full list of enhancements and fixes can be found in the v1.1.0 release notes on GitHub.

What is new in v1.1:

Managing TM1 security has never been easier

A brand new interface to manage security has been added. It is now easier and faster to:

  • Search for a specific user or group.

  • Create a new user or group.

  • Clone a security group to create a new group with the same security.

Major overhaul of the dimension / hierarchy editor

The Hierarchy editor has been significantly improved:

  • Changes to the hierarchy are now immediate.

  • Delete/Copy/Paste multiple leaf elements at the same time.

  • Copy a list of elements from Excel, Paste into the new element input box and then click Add!

Find/Replace across all TI code tabs

Searching or replacing a string in your code now happens across all tabs (Prolog, Metadata, Data and Epilog). Click on any search result and Arc will take you to the exact line of code:

Execute MDX queries and set expressions

Running and testing MDX queries is fundamental for TM1 developers, especially in a multiple hierarchy world. Arc v1.1 includes a dedicated plugin to run MDX queries on cubes and dimensions:

Run REST API queries

For those working on TM1 REST API applications or building new Arc plugins, running and testing TM1 REST API queries is important. That is why a new plugin dedicated to the TM1 REST API has been added:

Other enhancements:

  • Add a + button to create new objects (Dimensions, Cubes, Processes, Chores). #192

  • The Subset Editor window size now depends on the screen size. #181

  • In the Prolog tab, search capability has been added to the data source drop-downs. #165

  • Add Keyboard Shortcut to run a TI. #145

  • Add support for Dynamic SQL Queries during Preview. #128

  • Overwrite a cube if it already exists. #123

  • In the Hierarchy editor, the expand button expands only the first children. #105

  • Add refresh available dimensions button to create cube dialog. #5

Follow us on GitHub

We want to involve the TM1 community as much as possible in Arc's development, so we decided to make all our tickets public on GitHub. On the arc-issues repository, you can see all the open tickets and what's coming in the future. If you have any requests, feel free to create an issue there.

Try it now!

Arc comes with a three-month trial period and there is no installation required. Just download Arc and double-click the arc.exe file, and you will be coding in no time:

Happy coding!

Mastering hierarchies in IBM TM1 and Planning Analytics

With IBM TM1 10.2.2 end of support coming in September 2019, now is an ideal time to consider upgrading to IBM Planning Analytics.

Before upgrading, an important consideration is whether or not to incorporate Planning Analytics’ new Hierarchy feature into your applications. There are many aspects to this decision, so the objective of this article is to give you enough information to be able to answer this question:

Hierarchies vs Roll-ups

Let’s begin by answering this question: “What is a Hierarchy?” 

Until the arrival of Planning Analytics, what was commonly called a “hierarchy” in TM1 was simply a specific roll-up of C and N level elements in a dimension. For example, in a “Period” dimension, the same N-level elements could roll up to multiple C-level elements, with the roll-ups having names like “Full Year” and “Jun YTD”:

Cube Architecture with Hierarchies

TM1’s basic cube structure has not changed since TM1’s invention in 1984: a cube is made of two or more dimensions, a cell’s value is attached to the N-level elements in those dimensions, and each dimension can have multiple consolidation paths that roll up N and C elements. In other words, there has always been a direct relationship between a dimension and its elements.

Planning Analytics has introduced “real” hierarchies to TM1. Instead of a straight path from a dimension to its elements, there is now the option of inserting an intermediate level. This container object is called a “Hierarchy”, and a dimension can have as many hierarchies as you wish.

By default, the new Hierarchy feature is not turned on – it must be enabled by adding the line EnableNewHierarchyCreation=T in the tm1s.cfg file. When you turn on Hierarchies:

  • An extra level can be added between dimensions and elements in TM1’s object model

  • Dimensions are no longer a container of elements, they are a container of Hierarchies.

  • A default Hierarchy is created, which has the exact same name as the dimension itself (this is to maintain backward compatibility)

  • Each dimension can now have multiple Hierarchies; each Hierarchy contains its own set of consolidations and can include one or more of the leaf elements that are shared across Hierarchies.

The “before” and “after” of TM1’s object model looks like this:

It might look a bit more complex, but Hierarchies provide greater design flexibility and substantial performance benefits.

Greater flexibility

Hierarchies behave like “virtual dimensions”, enabling you to overcome one of the legacy limitations of TM1 – the need to rebuild cubes to accommodate new analysis dimensions.

By providing analysis Hierarchies, your application becomes more flexible and agile in accommodating evolving business requirements. Adding a Hierarchy does not require recreating a cube, nor modifying existing load processes or reports.

Greater performance

A major performance benefit of Hierarchies is that you can reduce the number of dimensions in an analysis cube, and fewer dimensions mean faster queries. For every dimension that is removed, at least one level is removed from the cube index (the actual number of levels removed depends on the number of N-level elements in the dimension). The net impact of a smaller index is that a query requires fewer “passes” to retrieve a cell value, resulting in better query performance.

However, using Hierarchies in lieu of dimensions will not necessarily reduce memory consumption, as data structures still need to be created whether the alternate rollups exist within a single dimension or in separate hierarchies.

Hierarchies with Dimensions

Dimensions still exist, but they are not used in the cube themselves; they are only used for establishing the dimension order.

After creating your first Hierarchy, the “Leaves” Hierarchy will be automatically added to the list of available Hierarchies. In the example below, the “Type” Hierarchy has been added to the Department dimension, and three Hierarchies (“Department”, “Leaves” and “Type”) are now available:

Hierarchies with Leaf Elements

Each Hierarchy can have its own leaf elements; when you delete a leaf element from a Hierarchy, the element is not deleted from other hierarchies or the Leaves Hierarchy. What this means is that data is still stored for this element in the underlying cubes.
If you want to delete a leaf element and delete the data in the cube (i.e. TM1’s traditional behaviour), you must delete it from the Leaves Hierarchy. This will delete the element from all hierarchies, and remove all cube data referenced by the leaf element.

Hierarchies with Processes

To work with Hierarchies in TurboIntegrator (TI), Planning Analytics has introduced a set of new TurboIntegrator functions. IBM has ensured these new functions are very similar to the TI functions for dimensions, and there is usually a 1-to-1 corresponding function. For example, you would use HierarchyExists (check if a Hierarchy exists) instead of DimensionExists (check if a dimension exists).

In the example below, the TI code on the left creates a dimension and on the right it creates a Hierarchy – you can see they are very similar:
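
The parallel can be sketched as follows, using the 'Department' dimension and 'Type' hierarchy from this article (the 'Sales' element is hypothetical):

```
# Classic dimension code
IF ( DimensionExists ( 'Department' ) = 0 );
    DimensionCreate ( 'Department' );
ENDIF;
DimensionElementInsert ( 'Department', '', 'Sales', 'N' );

# Equivalent Hierarchy code in Planning Analytics
IF ( HierarchyExists ( 'Department', 'Type' ) = 0 );
    HierarchyCreate ( 'Department', 'Type' );
ENDIF;
HierarchyElementInsert ( 'Department', 'Type', '', 'Sales', 'N' );
```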

Hierarchies with Rules

Working with Hierarchies will make your cube rules a bit more complex, because to reference an element in a Hierarchy, you must use the new “Hierarchy-qualified” syntax, as follows:

  • DimensionName:HierarchyName:ElementName

Note: You would need the DimensionName only if the ElementName is ambiguous.

If you omit the Hierarchy name, the “default” Hierarchy is used, which has the same name as the dimension.

In the example below, you can see two DB calls, the first one without Hierarchy and the second one referencing departments only in the “Type” Hierarchy:
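
As a sketch of the two styles, following the hierarchy-qualified syntax above (the 'Staff Cost' cube, 'Cost' measures and 'Internal' element are hypothetical):

```
# First DB: element resolved in the default 'Department' hierarchy
['Cost'] = N: DB ( 'Staff Cost', !Period, !Department, 'Amount' );

# Second DB: element qualified to the 'Type' hierarchy of 'Department'
['Cost Internal'] = N: DB ( 'Staff Cost', !Period, 'Department:Type:Internal', 'Amount' );
```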

Hierarchies with Attributes

In Planning Analytics, Hierarchies are stored as separate .dim files within the dimension’s “}hiers” folder in the TM1 Server’s data directory.

Although you can store attribute values for the same named element on different hierarchies, everything is stored in a single }ElementAttributes_ cube. This makes sense since that’s exactly how it works for all the data cubes as well.

Attr functions still work

In processes and rules, "Attr" functions such as AttrS and AttrPutS still work for hierarchies; you just need to use DimensionName:HierarchyName as the dimension name instead of simply the dimension name.

For each "Attr" function, IBM has introduced a new hierarchy-aware function starting with "ElementAttr", for instance:

  • Attrs -> ElementAttrs

  • AttrPuts -> ElementAttrPuts

  • AttrInsert -> ElementAttrInsert

These new functions behave the same as the old ones, with some slight changes, as you can see below:

AttrInsert vs ElementAttrInsert

To create a new attribute on a single Hierarchy only, use the new function ElementAttrInsert instead of AttrInsert, so the syntax will look like this:

  • ElementAttrInsert(cDimSrc, cHierarchy, '', cAlias, 'A');

Instead of 

  • AttrInsert(cDimSrc | ':' | cHierarchy, '', cAlias, 'A');

Using ElementAttrInsert avoids duplicate aliases on a consolidation that appears in two hierarchies of the same dimension.

Hierarchies with Subsets

With hierarchies, Subsets are not attached to a dimension but to a Hierarchy. If you are using IBM’s PAx or Cubewise Arc, you will see the Subsets attached to Hierarchies.

If you do not specify a Hierarchy when creating a subset, it will be added to all existing Hierarchies in the dimension.

Hierarchies with Picklists

The syntax of Picklists is also impacted by Hierarchies.

As you might expect by now, the syntax for subsets must also be “hierarchy-qualified”, so instead of SUBSET:Dimname:Subname you must specify SUBSET:Dimname\:Hierarchy:Subname

i.e. replace subset:Entity:FE – Division with subset:Entity\:Entity:FE – Division

Hierarchies with MDX

To reference an element in a Hierarchy in MDX, you must specify the Hierarchy name in your query, for example, instead of [Time].[2018] you will need to use [Time].[Time].[2018] to specify element “2018” from the default Hierarchy (remember, the default Hierarchy has the exact same name as its containing dimension).
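
A minimal sketch of a hierarchy-qualified MDX query; the [Sales] cube is hypothetical, while the Department "Type" hierarchy is the one used earlier in this article:

```
SELECT
   { [Time].[Time].[2018] } ON COLUMNS,
   { [Department].[Type].Members } ON ROWS
FROM [Sales]
```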

Named levels and default members

One of the advantages of using the cube viewer in PAx or Arc versus Perspectives is that you will not need to specify a selection for all dimensions to retrieve cube values. In fact, selecting even a single dimension in a cube view will result in some cube values being retrieved.

This is because for all the other dimensions not referenced in the cube view, TM1 will use the dimension’s default member. To define default members, you will need to specify the defaultMember value in the }HierarchyProperties cube:

If the defaultMember is not set in the }HierarchyProperties cube, the first element (by index order) is used.

It should be noted that in this cube view, hierarchies are treated as “virtual” dimensions (Time, Time:Fiscal Year, Time Half Year…). If you use levels in PAx or Cognos Analytics, this is where you will define them. To apply the changes, you must run the RefreshMdxHierarchy process function.
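
In a TI process, setting a default member and applying it could be sketched as follows (the 'Time' dimension and 'Total Time' element are hypothetical, and the }HierarchyProperties coordinates assume a dimension/hierarchy/property layout):

```
# Set the default member for the default 'Time' hierarchy,
# then refresh the MDX hierarchy metadata so the change takes effect
CellPutS ( 'Total Time', '}HierarchyProperties', 'Time', 'Time', 'defaultMember' );
RefreshMdxHierarchy ( 'Time' );
```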

Hierarchies with Reporting

Currently, as of Planning Analytics v2.0.5, the main weakness of Hierarchies is on the reporting side. Unfortunately, the DBR function in Active Forms does not support hierarchies. If you are a heavy user of Active Forms, it might be challenging to reproduce them using hierarchies.

Alternatives to PAX Active Forms are:

  • Cognos Analytics – this is a good option for read-only reporting, as Cognos Analytics fully supports MDX and the new Hierarchy structure.

  • If you need both read and write capability (i.e. planning + reporting), Cubewise Canvas is a great solution, as the DBR in Canvas supports hierarchies.

Should I implement hierarchies?

As you might have guessed, the answer is “it depends”.

By using Hierarchies, you will gain in performance and flexibility, but you possibly lose some Excel reporting capabilities if you are a heavy user of Active Forms.

If you think you can reproduce your Active Forms with PAX’s Exploration Mode without using DBRs, or you do not need Excel-based reports at all, then it is a definite “yes” to implement Hierarchies.

Re-architecting your applications to take advantage of the new Hierarchy capability will require some time and effort, but it is an investment that will pay off in the long run, as your solutions will become nearly “future-proof” in accommodating evolving business requirements.


Canvas 3 Released


Canvas 2 was already a big step forward, making Canvas much faster and bringing a lot of new features such as report bursting. Thanks to feedback from our customers, this new version takes Canvas much further by adding two much-requested components: a whole new Cube Viewer and Subset Editor.

Cube Viewer

The new Canvas Cube Viewer offers greater flexibility to your users and speeds up development. It fully supports all IBM Planning Analytics features such as hierarchies and sandboxes. What’s more, you are free to customize the look the way you like.

Subset Editor

Canvas already had lots of different ways to facilitate user selections, with drop-downs, radio buttons, date pickers... Now, with the Subset Editor, your users can use all the common TM1 and Planning Analytics filter features, such as filtering by level, attribute or string. The Subset Editor is also highly customisable.


Sandbox support

Canvas now supports sandboxes. Creating, publishing, discarding and deleting sandboxes is now very easy in your Canvas application. DBRs support sandboxes as well, so you can compare two sandboxes by simply pointing two different DBRs at two different sandboxes.

New samples

A whole new sample application has been added. It is a great way to learn how to customize Canvas’s look and feel following standard practices.

Introducing Apliqo UX

If you are looking for an out-of-the-box solution, check out Apliqo UX. The Apliqo team has built an app builder on top of Canvas with lots of ready-to-use components. With Apliqo UX you get the power and freedom of Canvas combined with the ease of drag-and-drop.


Optimizing your TM1 and Planning Analytics server for Performance

The combination of in-memory calculation, smart design and years of optimization has made IBM TM1 and Planning Analytics renowned as one of the fastest real-time analytical engines on the market. The default settings you get "out of the box" are all that is required for a fast TM1 model. This article goes a step further, using the many parameters (over 100) that allow you to tune your system and get maximum performance from your TM1/Planning Analytics server.



MTQ (multi-threaded querying)

Multi-threaded querying allows TM1 to use multiple cores for a single query. This feature provides significant performance improvements, especially for large queries with lots of consolidations. An optimal number of cores (the "sweet spot") needs to be established to achieve maximum performance, so conduct various tests to find it. Be careful not to exceed your licensing arrangements; in short, ensure you have enough PVU licenses:



MTFeeders is a new parameter introduced with Planning Analytics (TM1 server v11). By turning this parameter on in tm1s.cfg, MTQ will also be triggered when recalculating feeders, that is when:

  • CubeProcessFeeders() is triggered from a TM1 process.

  • A feeder statement is updated in the rules.

  • Construction of feeders at startup.

MTFeeders can provide a significant improvement, but be aware that it does not support conditional feeders. If you are using conditional feeders where the condition clause contains a fed value, you have to turn it off.

To turn on MTFeeders during server start-up you will need to add MTFeeders.AtStartup=T.
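
In tm1s.cfg, enabling both behaviours described above amounts to two lines:

```
MTFeeders=T
MTFeeders.AtStartup=T
```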


ParallelInteraction (TM1 only)

This feature is turned on by default in Planning Analytics (TM1 11+); you need to set it to true only if you are still using TM1 10.2.

Parallel interaction allows for greater concurrency of read and write operations on the same cube objects. It can be crucial for optimizing lengthy data load processes. Instead of loading all data sequentially, you could load all months at the same time, which is called parallel loading. Parallel loading allows you to segment your data and leverage multiple cores to load the data into cubes simultaneously.

To manage the threads and keep their number under the number of cores, we recommend using a free utility, Hustle.



MaximumCubeLoadThreads

This parameter impacts only the start-up time of your PA/TM1 instance. It specifies whether the cube and feeder calculation phases of server loading are multi-threaded, so multiple cores can be used in parallel. You specify the number of cores you would like to dedicate to cube loading and feeder processing.

This is particularly useful if you have many large cubes and there is an imperative to improve server start-up performance. It is recommended that you specify the maximum number of cores minus 1.

As with MTQ, you will need to test multiple scenarios to find the number of cores that gives optimal performance.

However, if you are using Planning Analytics, you should use the new parameter MTCubeLoad instead. More information can be found in this article:



Persistent feeders improve the loading of cubes with feeders, which also improves server start-up time. When you activate persistent feeders, a .feeders file is created for each cube that has rules. Upon server startup, the TM1 server references the .feeders files and re-loads the feeders for each cube.

It is best practice to activate persistent feeders if you have large cubes which have an extensive number of fed cells.

In many cases start-up time can be significantly reduced; examples of an 80-90% reduction are common.

Things to look out for

  • Feeders are saved to the .feeders file. Therefore, even if you remove a particular feeder from the rule file it will remain in the .feeders file. You will need to delete the .feeders file and allow TM1 to re-generate the file.

  • If you have dynamic rules or consolidated elements on the right-hand side of feeders, you will need to re-process the feeders (for example with CubeProcessFeeders) if you add a new version, for instance.

  • Although this is a great feature, judgement is required on when to use it. For instance, if your cubes are small and do not have many rules/feeders, it may be more beneficial to leave it off.


Other parameters which will improve user experience

  • AllRuleCalcStargateOptimization can improve performance in calculating views that contain only rule-calculated consolidations.

  • UseStargateForRules: By default, when retrieving a calculated cell, the value is retrieved from a Stargate view stored in memory. In some rare instances using a Stargate view can be slower than requesting the value from the server, so you can turn Stargate views off for rules with UseStargateForRules=F.

  • ViewConsolidationOptimization enables or disables view consolidation optimization. It increases the performance but increases the amount of memory required for a given view.

  • CalculationThresholdForStorage: the minimum number of cells required before Stargate view creation is triggered. Set it to a low number to maximize caching, at the cost of more memory.

  • MaximumViewSize: if the memory consumed while constructing a view reaches this threshold, view construction is aborted rather than leaving the client waiting indefinitely.

  • CheckFeedersMaximumCells: if a user checks feeders in the Cube Viewer from a cell whose consolidation contains too many cells, TM1 refuses the request rather than risking a very long client hang or eventual crash.

  • MaximumUserSandboxSize: stops the server using excessive memory when users make very large sandbox changes.

  • LogReleaseLineCount: prevents users from being locked for a long time while admins run transaction log queries.

  • StartupChores: a Stargate view is created the first time a user opens a view; a second user opening the same view is faster because the view is already cached. To spare the first user that wait, you can set up a chore that runs at server startup to cache views, giving all users better performance.

  • SubsetElementBreatherCount: allows a lock on a subset to be released when other requests are pending.

  • UseLocalCopiesForPublicDynamicSubsets: improves performance by invalidating only the user’s local copy of a public dynamic subset, rather than write-locking the public subset itself.

  • JobQueuing: Turns on queuing for Personal Workspace or Sandbox submissions.

  • JobQueueThreadPoolSize: the job queue is specific to Contributor/Application Web, which uses sandboxes by default. It manages all user sandbox commits in a job queue so users do not have to wait.

It is important to be aware that all the parameters in the tm1s.cfg file are now dynamic in IBM Planning Analytics, meaning they can be changed with immediate effect.


Mastering MTQ with TM1 and Planning Analytics

Multi-Threaded Queries (MTQ) allow IBM TM1 and Planning Analytics to automatically load balance a single query across multiple CPU cores. In other words, TM1 is fast, and MTQ makes it even faster. It has been around for a number of years, but there are still some frequently asked questions which need a clear answer.


What is the recommended value for the MTQ setting?

IBM's recommendation is to set MTQ to the maximum number of available processor cores. In our experience, it is better NOT to give ALL processor cores to MTQ, but to leave one or two cores for the server so it has room to operate (MTQ=-2 or MTQ=-3, so that the operating system and other server activities get some processing time). MTQ=-n means that TM1 will use (total number of available cores) - n + 1. For more information about configuring MTQ, you should read this IBM technote:
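
To make the arithmetic concrete, here is a small illustrative helper; it simply encodes the rule quoted above and is not an official IBM formula:

```python
def effective_mtq_cores(total_cores: int, mtq: int) -> int:
    """Number of cores TM1 would use for a given MTQ setting.

    A positive MTQ is the core count itself; MTQ=-n means
    "total available cores - n + 1", so MTQ=-1 uses every core
    and MTQ=-3 leaves two cores for the operating system.
    """
    if mtq < 0:
        return total_cores + mtq + 1  # total - n + 1, with mtq = -n
    return min(mtq, total_cores)

print(effective_mtq_cores(32, -1))  # 32: all cores
print(effective_mtq_cores(32, -3))  # 30: two cores left for the OS
```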

The MTQ setting is per TM1 instance. If you run more than one TM1 instance on the same server, the sum of the MTQ values for all instances should not exceed the number of available cores. For example, with two TM1 instances on the same 32-CPU server, one MTQ value could be 20 and the other 10; the sum (30) stays safely below the 32 available cores.


How to find the optimal setting?

Increasing the number of cores on your server to increase the MTQ value will not necessarily make your TM1 application faster. MTQ works by splitting a query into multiple chunks and then combining the results from each chunk into a final result.

There is overhead involved in splitting the query and combining the results. Depending on the size of the data, there is a point where the splitting takes more time than it saves compared with using larger chunks.

Overall, a query typically hits this tipping point somewhere between 32 and 64 cores. If your TM1 and Planning Analytics application contains only small cubes, the optimal number of cores could be lower, and for applications with bigger cubes it may be more than 64.

Source: IBM

The only way you can really know the optimal setup is by testing.


When is MTQ triggered?

MTQ does not trigger all the time. It will only trigger in one of the following scenarios:

  • Queries exceeding 10,000 cell visits.
  • Creation of a TM1 ‘Stargate’ view.
  • Views containing the results of rule derived calculations.

To enable or disable MTQ when calculating a view used as a TM1 process data source, use the MTQQuery parameter in tm1s.cfg. You can override MTQQuery for a specific process by using one of the functions EnableMTQViewConstruct() or DisableMTQViewConstruct().
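
As a sketch, the override sits in the Prolog of the process concerned (the function names are from the paragraph above; everything else is illustrative):

```
# Prolog: force multi-threaded construction of this process's
# view data source, regardless of the MTQQuery setting in tm1s.cfg
EnableMTQViewConstruct ( );

# To do the opposite for this process only, you would instead call:
# DisableMTQViewConstruct ( );
```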


Does MTQ manage the order of queries?

As per IBM, “MTQ value does not specify the total # of CPUs that TM1 can leverage for multiple queries. MTQ defines the # of CPU cores that TM1 may leverage for individual queries”.

In other words, MTQ will be able to split one query into multiple chunks, but it does not manage the sequence of queries. If your queries are queuing, it has nothing to do with MTQ.


Differences between MTQ and Parallel Processing

Parallel processing and MTQ are two separate things; the Q in MTQ stands for query. MTQ is only used when you are reading data from the cube. If you are running a straight load, writing data into the cube, MTQ has no influence.

Running a TM1 process might not always trigger MTQ. For example, if a process uses an existing view, MTQ might not be triggered, depending on the view; the process would need to construct the view to trigger MTQ.


Fine-tuning MTQ parameters

To fine-tune MTQ behaviour, there are other tm1s.cfg parameters you can use. The first four below are documented by IBM; the remainder are undocumented:


  • MTQ.CTreeRedundancyReducer
  • MTQ.EnableParallelFacetQuery
  • MTQ.OperationProgressCheckSkipLoopSize
  • MTQ.SingleCellConsolidation

Undocumented settings to be used under direction from IBM

  • MTQ.ForceParallelTxnOnMainWorkUnit
  • MTQ.ImmediateCheckForSplit
  • MTQ.UseThreadPrivateCacheCopyForOperationThreads
  • MTQ.MultithreadStargateCreationUsesMerge
  • MTQ.CTreeWorkUnitMerge
  • MTQ.OperationThreadWakeUpTime

Turn on MTQ for feeders (MTFeeders)

By default, MTQ is not triggered when feeders are processed. To enable MTQ for feeders, IBM introduced a new tm1s.cfg parameter in Planning Analytics: MTFeeders. With MTFeeders turned on, MTQ will be triggered when:

  • CubeProcessFeeders() is triggered from a TM1 process.
  • A feeder statement is updated in the rules.
  • Construction of feeders at startup.

MTFeeders can provide a significant improvement, but be aware that it does not support conditional feeders. If you are using conditional feeders where the condition clause contains a fed value, you have to turn it off.

To turn on MTFeeders during server start-up you will need to add MTFeeders.AtStartup=T.


Data Science with TM1 and Planning Analytics

Having accurate data in your TM1 and Planning Analytics application is just one part of the job; the second part, which is even more important, is to understand your data. This is where data science can help: it improves how you make decisions by better understanding the past and predicting the future.

Combine the best of two worlds

On one side, IBM TM1 and Planning Analytics have been very successful over the years, mainly for their strong planning and reporting capabilities; on the other side, Python is becoming more and more popular thanks to its unique data science ecosystem. Now, with the free Python package TM1py, you can combine the best of these two worlds.

Open the Python community to TM1/Planning Analytics

TM1py opens the Python community to IBM TM1 and Planning Analytics, making it easy to apply data science such as statistics or time series forecasting to your TM1 and Planning Analytics application.

A whole new world of free tools to boost your IBM TM1 and Planning Analytics application.

The Python community is very creative in terms of data science: there are lots of free tools for data exploration, such as Pandas and Plotly, and for time series forecasting, such as Facebook Prophet. All these packages are free and ready to use!

For example, with a few lines of code you can use the Plotly package to build interactive charts:

Do the same things but smarter!

Free up your time by automating repetitive tasks: TM1py enables you to work smarter, for example by uploading daily exchange rates from a web service or automating your daily forecast using Facebook Prophet:

A Step-By-Step Guide To Data Science with TM1/Planning Analytics

To see data science with TM1 and Planning Analytics in action, we created a series of three articles which will guide you step by step through your first data science experience. In Part 1, you will load weather data from a web service into your TM1 cube; in Part 2, you will explore your data using pandas and Plotly; and finally in Part 3, you will apply time series forecasting using Facebook Prophet:

Remove Undo and Redo button on TM1Web

In TM1 Web, or in a cube view from Architect/Perspectives, a user can undo or redo a value they have entered.

Being able to undo or redo a data change is great but, on the other hand, a user clicking the Undo or Redo button can cause the TM1 server to hang for several minutes.

That is why many people have asked whether it is possible to remove the Undo and Redo functionality. Even though there is no official setting to turn this feature on or off, a workaround exists (for TM1 Web only).

The workaround consists of a manual change to the TM1 Web CSS file to remove the Undo and Redo icons. This workaround is not supported by IBM; it is a hack, so after any software upgrade you will need to apply it again.

This trick has been tested on websheets, the Cube Viewer and the TM1Web URL API, and all work as expected. The only caveat is that users who have previously opened TM1 Web will need to clear their browser cache.

This workaround works with both TM1 10.2.2 and Planning Analytics; however, as you can see below, the steps are slightly different.

Remove Undo and Redo button with TM1 10.2.2 FPx

In TM1 Web 10.2.2, you can find the Undo and Redo icon just after the paste icon:

To remove these icons, follow these steps:

1. Go to the following directory {tm1 directory}\webapps\tm1web\css\

2. Open the file standaloner.css as administrator

3. Look for .undoIcon and replace the following code:

.undoIcon {background-image: url(../images/toolbar/action_undo.gif); width: 16px; height: 16px;}

with:

.undoIcon {display:none;background-image: url(../images/toolbar/action_undo.gif); width: 16px; height: 16px;}

Adding the display:none declaration hides the Undo icon. In standaloner.css, .undoIcon appears twice, so you will have to repeat this step for the second occurrence.

4. Do the same for .redoIcon, replacing:

.redoIcon {background-image: url(../images/toolbar/action_redo.gif); width: 16px; height: 16px;}

with:

.redoIcon {display:none;background-image: url(../images/toolbar/action_redo.gif); width: 16px; height: 16px;}

5. Save the standaloner.css file.
6. You don't need to restart TM1 Web; just open TM1 Web in any browser, making sure the browser cache is cleared. You should now see that the icons have disappeared:

Remove the Undo and Redo button with Planning Analytics

In the TM1 Web version of Planning Analytics, you can find the Undo and Redo icon just after the paste icon:

To remove these icons, follow these steps:

1. Go to the following directory {tm1 directory}\webapps\tm1web\scripts\tm1web\themes\flat

2. Open the file flat.css as administrator

3. Look for .tm1webUndoIcon and replace the following code:

.tm1webUndoIcon {background-image: url("share/toolbar/images/menu_undo.svg");}

with:

.tm1webUndoIcon {display:none;background-image: url("share/toolbar/images/menu_undo.svg");}

4. Do the same for .tm1webRedoIcon, replacing:

.tm1webRedoIcon {background-image: url("share/toolbar/images/menu_redo.svg");}

with:

.tm1webRedoIcon {display:none;background-image: url("share/toolbar/images/menu_redo.svg");}

5. Save the flat.css file.
6. You don't need to restart TM1 Web; just open TM1 Web in any browser, making sure the browser cache is cleared. You should now see that the icons have disappeared:

Debugging Turbo Integrator processes with Arc

Have you ever spent hours trying to understand why your TM1/Planning Analytics process is not working, and ended up exporting all your variables into a flat file?

What is debugging?

Debugging allows you to step through your code line by line and inspect variable values as you go. There is no need to export your variables into a flat file; you can step into your code and watch the values change in real time.

How does it work?

The first thing you have to do is set a breakpoint. A breakpoint allows you to stop the execution of the code at a particular point. Once the process stops, you can see the current variable values and step through to analyze the execution of your code.

Why debugging with Arc?

Arc is an integrated development environment (IDE) for TM1 and Planning Analytics. With Arc you can build, develop, manage and debug your TM1 processes in a simple, easy-to-use interface.

See it in action:

For more information about debugging a TM1 process with Arc, you should check the following Help article which digs deeper into this feature:

Download Arc Now

A beta version of Arc is currently available for download here. It comes with a 6-month trial license, so you have no excuse not to try it!



Resolving Circular Reference Calculation

Have you already been stuck with a circular reference in TM1/Planning Analytics?

One of the main reasons why TM1/Planning Analytics has been so successful over the years is its calculation engine. TM1 can resolve very complex calculations over millions of cells in an instant. However, its only weakness is that it needs a little bit of help to resolve an equation with a circular reference. A circular reference occurs when a formula in a cell directly or indirectly refers to its own cell:


In the equation above, to calculate the Dealer Margin value, the equation needs the Dealer Margin value itself:

Even though the equation is correct, you will have to resolve this circular reference before you can calculate the Dealer Margin value in TM1.

This article explains how to resolve this circular reference using a bit of mathematics:

Circular Reference in TM1/Planning Analytics

TM1 does not resolve circular references; the following formulae in a TM1 rule will result in #N/A values:

['Dealer Margin'] = (['MSRP']-['VAT']-['Consumption Tax']) * ['DM%'];
['Consumption Tax'] = ['W/S Price'] * ['Consumption Tax %'];
['W/S Price'] = ['MSRP'] - ['Dealer Margin'] - ['Panda Fund'] ;

#N/A in TM1 can mean either a division by zero or a circular reference in the equation. To resolve the circular reference, we will have to manipulate the equations.

Resolving Circular Reference

Our objective is to transform the circular reference system into a linear system:

Okay so let's do it!

Starting Point

We have a system of three equations with a circular reference:

How to resolve this system

  1. Simplify the system by removing the parentheses, so the equations are easier to manipulate.
  2. Break the circular reference: gather the 3 equations into one main equation.

Simplify the system

To simplify the system, we first need to get rid of the parentheses in the Dealer Margin equation:

Our new system is now as follows:

Break the circular reference

Now we need to choose one equation and substitute the other two equations into it. You can choose any of the three equations to be the main one; in this example we choose the W/S Price equation:

To break the circular reference, we will first substitute the Dealer Margin definition into the W/S Price equation, and then do the same for the Consumption Tax definition:

After a few steps, the W/S Price equation will no longer depend on the Dealer Margin value.

Step 1: Substitute the Dealer Margin definition into the W/S Price formula

Step 2: Substitute the Consumption Tax definition into the W/S Price formula

It's almost done; now we just need to simplify this main equation.

Step 3: Move W/S Price * Consumption Tax % * DM% to the left-hand side

Step 4: Factor out W/S Price

Step 5: Divide both sides by (1 - Consumption Tax % * DM%)

That is it! Our equation is now:

If we now update our TM1 rule:

['Dealer Margin'] = (['MSRP']-['VAT']-['Consumption Tax']) * ['DM%'];
['Consumption Tax'] = ['W/S Price'] * ['Consumption Tax %'];
['W/S Price'] = (['MSRP'] - ['MSRP']*['DM%'] + ['VAT']*['DM%'] - ['Panda Fund']) / (1 - ['Consumption Tax %']*['DM%']) ;

Refresh the cube and the #N/A will have disappeared:
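As a sanity check, the algebra can be verified numerically. The short sketch below (with made-up input values) computes W/S Price with the derived formula and confirms that the original circular equation still holds:

```python
# Illustrative input values (not from the article)
msrp, vat, panda_fund = 100.0, 10.0, 5.0
dm_pct, cons_tax_pct = 0.20, 0.05  # DM% and Consumption Tax %

# Derived, circular-reference-free formula for W/S Price
ws_price = (msrp - msrp * dm_pct + vat * dm_pct - panda_fund) / (1 - cons_tax_pct * dm_pct)

# Back-substitute into the original equations
consumption_tax = ws_price * cons_tax_pct
dealer_margin = (msrp - vat - consumption_tax) * dm_pct

# The original (circular) W/S Price equation must still hold
assert abs(ws_price - (msrp - dealer_margin - panda_fund)) < 1e-9
```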

Remember this when kids say, “I’ll never use this math stuff in real life”…


The TM1 REST API Collection for Postman

The TM1 REST API is a way of accessing data and almost everything else in TM1. With the TM1 REST API you can do things that normal TM1 clients cannot, such as:

  • Get all cubes that share a particular dimension
  • Get all processes that have an ODBC Datasource
  • Get the last 10 message log entries that were referring to a certain process

Tackling the TM1 REST API by yourself can be challenging. To help you get started, we have gathered the main TM1 REST API queries into a ready-to-use Postman collection. All you need to do is download Postman and then follow these steps to set up the TM1 REST API Collection for Postman.

If you are not familiar with Postman, you should read the following article, which explains how to install Postman and how to run your first TM1 REST API query:

Download the TM1 REST API Collection

A Postman collection lets you group individual requests together. To download the TM1 REST API Collection just click on the following button:

Once downloaded you should see two files:

  • Canvas Sample.postman_environment.json: Contains information about the TM1 instance you want to query.
  • The TM1 REST API.postman_collection.json: Contains all TM1 REST API queries.

Import the TM1 REST API Collection

To import a Collection in Postman, just click the import button in the top left corner and then pick the The TM1 REST API.postman_collection.json file:

Once imported, you can click on the Collections tab, where you should see the TM1 REST API folder.

Inside this folder, the queries are split into 5 sub-folders:

  • Cubes: Get all cubes, Execute a view...
  • Dimensions: Get all dimensions, create/delete dimensions or elements...
  • Processes: Execute or Update processes...
  • Chores: Execute or Update chores...
  • Administration: Get configuration, sessions, threads...

If you click on the first query Cubes Get, you will see that the URL uses parameters such as {{protocol}} or {{serverName}}:

Instead of hard-coding the protocol, server name and httpPortNumber, we use variables defined in a Postman environment. If a variable shows in red, it is missing from the environment's variable list. In this example no environment is set up, and at the top right you should see "No Environment":

Create a new Postman Environment

An environment in Postman enables you to save variables that you can then use in the URL. You can choose to create the environment manually or just upload the Canvas Sample.postman_environment file that you can find in the same folder you just downloaded. To import this file, go to Manage Environment:

Click the Import button and then select Canvas Sample.postman_environment.json file:

Once imported you should be able to see the new environment Canvas Sample:

Click on the environment to see all the variables:

If you now select your environment from the dropdown list at the top right, the variables in the URL should turn orange. Orange means that Postman has found the variables in the selected environment (a missing variable would still be red):
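Incidentally, the Canvas Sample.postman_environment.json file is plain JSON, so you can also create or edit it by hand. An illustrative version (variable values are examples only):

```json
{
  "name": "Canvas Sample",
  "values": [
    { "key": "protocol", "value": "https", "enabled": true },
    { "key": "serverName", "value": "localhost", "enabled": true },
    { "key": "httpPortNumber", "value": "8882", "enabled": true }
  ]
}
```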

Manage authentication

In the Authorization tab you will notice that it is set to Inherit auth from parent. Instead of defining the credentials for each query, they are stored in one location: the parent folder. To update the credentials, click the edit button of the collection:

Set the proper credentials in the Authorization tab. In this example we are using basic TM1 authentication (mode 1).

More information about how to set up Authorization with CAM Security can be found in this article:

You should now be able to run the query by clicking the Send button. If you get the following error, it might be due to SSL:

To disable SSL certificate verification, go to File > Settings and turn off the SSL certificate verification option:

If you click send, you should now be able to see the list of cubes:

If it still does not work, first check your environment variables, then check that the TM1 REST API is enabled for your TM1 instance.

Explore the TM1 REST API Collection

You are all set! You can now run all the queries. Do not forget to update the environment variables to match your TM1 instance and the TM1 objects you want to query, such as cubes, processes and chores.

What to do next?

If you are interested in building web-based TM1 planning and reporting applications, you should have a look at Canvas, a web development framework.

If you want to integrate systems with your TM1/Planning Analytics application, you should have a look at TM1py which is a Python package that wraps the TM1 REST API in a simple to use library.





Mastering the TM1 REST API with Postman

Do you want to do more with TM1? In TM1 10.2, IBM introduced the TM1 REST API, which enables you to do pretty much anything you want with your IBM TM1/Planning Analytics application.

In this post you will find everything you need to know to run your first TM1 REST API query and understand how to read the data.

The TM1 REST API for Dummies

What is the TM1 REST API?

The TM1 REST API is a way of accessing data and almost everything else in TM1. Rather than being a proprietary API like the old TM1 interfaces, it is based on web standards, making it accessible to a wide range of developers.

Why use the TM1 REST API?

The TM1 REST API is fast, and there are no external web servers or components to install. With the TM1 REST API you can do things that the traditional TM1 clients cannot, such as listing all cubes that share a specific dimension or executing MDX queries to get cube data, and much more...

TM1 REST API prerequisites

Since its introduction, IBM has continuously improved the TM1 REST API with every new release. We recommend using it with TM1 10.2.2 FP5 as a minimum.

How to run your first TM1 REST API query?

A TM1 REST API query is a URL which looks like this:

  • https://localhost:8882/api/v1/Dimensions

It always has the same components:

  • protocol://servername:port/api/v1/resource
    • protocol: either http or https, depending on whether SSL is set up.
    • servername: the server where the TM1 instance is located.
    • port: the httpPortNumber parameter value in the tm1s.cfg.
    • resource: the resource you want to retrieve, e.g. Dimensions to retrieve all dimensions or Cubes to retrieve all cubes.
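Assembling these components is just string formatting; a tiny illustrative helper (the function name is our own, not part of any API):

```python
def tm1_rest_url(protocol: str, server_name: str, port: int, resource: str) -> str:
    """Build a TM1 REST API URL from its four components."""
    return f"{protocol}://{server_name}:{port}/api/v1/{resource}"

# Reproduces the example URL used throughout this article
print(tm1_rest_url("https", "localhost", 8882, "Dimensions"))
# https://localhost:8882/api/v1/Dimensions
```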

Let's have a look at an easy example. To get the list of all dimensions, you can use the following URL in your browser:

  • https://localhost:8882/api/v1/Dimensions

Before running this query, you should make sure that the REST API is enabled on your TM1 instance. The data will be returned in JSON (JavaScript Object Notation) format.

How to read a JSON format?

As you can see above, the JSON format is not easily readable in the browser, but do not worry: there are lots of online tools that can help you format it. For example, if you copy the content from the browser and paste it into a JSON viewer, the viewer tab will show the data structure:
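You can also parse the response programmatically. The TM1 REST API returns OData-style JSON in which the results sit in a value array; a sketch with a hard-coded, made-up payload:

```python
import json

# Illustrative payload in the shape of an /api/v1/Dimensions response
# (dimension names are made up)
raw = '{"value": [{"Name": "Account"}, {"Name": "Region"}, {"Name": "Month"}]}'

data = json.loads(raw)
dimension_names = [d["Name"] for d in data["value"]]
print(dimension_names)  # ['Account', 'Region', 'Month']
```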

Instead of doing these steps manually (running the query and then viewing the data in a JSON viewer), you can use a tool that does both for you: Postman.

Postman makes the TM1 REST API easy!

Postman is a modern HTTP Client with a fancy interface. It makes interaction with TM1 through the TM1 REST API easier compared to doing it through Chrome, CURL or a programming language.

Download Postman

Postman has a free app that you can download here. Once downloaded, just run the exe file; the installation should take less than a minute.

Postman will start and ask you to sign in. You can choose to create an account or click "Take me straight to the app". The advantage of signing up is that Postman saves your work in its cloud, so you can retrieve it on another machine after signing in.

Run your first TM1 REST API query with Postman

To run a query in Postman just copy the same query we used above and paste it in the text input:

  • https://localhost:8882/api/v1/Dimensions

    In this example the TM1 instance uses basic TM1 security with the user admin and no password. In the Authorization tab, select Basic Auth and then enter the username and password. Click the Send button to run the query:

    After clicking the Send button, if you get the error above you might have to turn off SSL verification. To do that, go to File, then Settings, and uncheck Postman's SSL certificate verification:

    Then click the Send button again. Once set up, you should be able to see the list of dimensions in the Body section:

    If you want to get the list of cubes instead of the list of dimensions you can replace Dimensions with Cubes:

    • https://localhost:8882/api/v1/Cubes

    Run TM1 REST API query with CAM Security

    To run a TM1 REST API query on a TM1 instance using CAM Security you will have to change the Authorization to No Auth.

    First, you need to encode your CAM user and password. You can use an online Base64 encoder: click on the Encode tab, type user:password:AD and then click Encode to encode the string:

    If you are not sure about the AD namespace, you can log in to Architect and check the user. In this example, after logging in to Architect, the user is AD\user.

    In Postman, you will then need to set the Authorization type to No Auth:

    In the Headers tab, add a new key Authorization whose value is CAMNamespace followed by a space and the Base64-encoded credentials:
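    The encoding step itself is plain Base64, so you can also produce the header value in a few lines of Python (user, password and the AD namespace are placeholders):

```python
import base64

# Placeholder credentials: CAM user, password and namespace
token = base64.b64encode(b"user:password:AD").decode("ascii")

# Value for the Authorization header: the CAMNamespace keyword,
# a space, then the Base64-encoded credentials
authorization_header = "CAMNamespace " + token

# Decoding the token returns the original credentials
assert base64.b64decode(token) == b"user:password:AD"
```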

    The TM1 REST API Collection for Postman

    That is it! You are now ready to dig into the TM1 REST API. To help you get started, we have gathered the most important TM1 REST API queries into a Postman collection, which you can download here:

    What to do next?

    If you are interested in building web-based TM1 planning and reporting applications, you should have a look at Canvas, a web development framework.

    If you want to integrate systems with your TM1/Planning Analytics application, you should have a look at TM1py which is a Python package that wraps the TM1 REST API in a simple-to-use library.





    How Cubewise Code will shape the future of IBM TM1/Planning Analytics in 2018


    A lot happened in the TM1/Planning Analytics world in 2017. Canvas has been endorsed by many customers as their new way to build modern web planning and reporting applications. TM1py brought the TM1 and Python communities together for the first time to find new ways to integrate external systems with your TM1 application.

    In 2018, we continue mastering the TM1 REST API by introducing a brand new product:

    Something that all TM1 developers have been asking for! A new way to build your TM1 models which takes advantage of all the new features of TM1 11 / Planning Analytics. More information to come on this blog or you can contact your local Cubewise office.

    Pulse goes Big Data with Kibana and Elasticsearch

    Building reports and analysing the core of your TM1 application (TM1 user sessions, TM1 process errors, TM1 process/chore runtimes...) will become even easier with Kibana and Elasticsearch.

    Kibana is probably the most exciting new feature of Pulse since migration was added. With Pulse v5.7, Pulse can now send data to Elasticsearch, one of the best and most popular Big Data stores. Kibana provides dashboarding / reporting on top of the Pulse data stored in Elasticsearch enabling you to develop your own metrics and share them with your TM1 community.

    Canvas Cube Viewer and Subset Editor

    Canvas will continue revolutionising the way TM1 planning and dashboarding applications are built. Canvas proved in 2017 that it is a mature, scalable and stable solution for many customers. In 2018, we will make Canvas even better by introducing many exciting new features, such as the brand-new Cube Viewer, Subset Editor and new samples.

    A new version of Bedrock and TM1py

    In 2018, there will be a new version of Bedrock (v4) which will be designed for IBM Planning Analytics. This will support hierarchies and all the new functions introduced with Planning Analytics Local. We will continue improving TM1py as well, with a lot of new features and articles to inform the TM1 Community about what you can do with Python.

    Exciting IBM TM1/Planning Analytics conferences coming close to you

    This year TM1 and Planning Analytics conferences will be held in four locations:

    • London in April
    • Sydney and Melbourne in August
    • Las Vegas in September

    We will also be at Think conference in March so if you are in Las Vegas drop by and say hello.

    Read more:



    How Cubewise CODE has revolutionized TM1 Planning Analytics in 2017

    A lot happened in 2017 in the IBM Planning Analytics (TM1) world. Here is a quick recap from Cubewise CODE.

    Pulse: Centralized Database Architecture

    Pulse take-up continues to grow beyond 150 customers globally, allowing our customers to watch over a combined total of well over 50 million lines of TM1 code. Accordingly, the number of "large enterprise" TM1 implementations has increased, necessitating support for MS SQL Server. The latest version, Pulse v5.6, incorporates a centralized database architecture which brings downstream benefits such as forensic TM1 server and user reporting via the Big Data provider Elasticsearch. This version is a big step forward for Pulse in the cloud.

    Canvas v2 Released

    Canvas v2 was a major milestone in the maturity, scalability and stability of the product. This new version introduced lots of new features such as:

    A new way to integrate systems with your IBM TM1 Planning Analytics application

    TM1py is a free Python package that wraps the TM1 REST API in a simple-to-use library, making it easier to integrate systems more effectively with IBM Planning Analytics.

    Speed up your IBM Planning Analytics development with our free products

    Bedrock, TM1Kill, Hustle and many other free products will save you lots of time as a TM1 developer:

    Cubewise EDU Conferences and Training

    In 2017, we hosted more than 500 paying delegates at our IBM Planning Analytics conferences in Sydney, London, NYC and Los Angeles. To learn more about our products, training is now available from the Cubewise EDU page.


    Looking forward to 2018

    2017 was a great year for the TM1 community and Cubewise Code. We are looking forward to 2018 and making IBM Planning Analytics even better!


    How to find over feeding in your TM1 model

    Feeders are a crucial part of IBM TM1/Planning Analytics, giving us ad-hoc rule calculations without loss of performance or requiring pre-calculations of results. However, getting feeders right takes a good understanding of the model and how values influence a calculation.

    How to check if a value is fed?

    The first hurdle when working with feeders is making sure that every calculation that results in a value is fed. This is important because in TM1 only fed cells roll up in a consolidation. If a system is underfed, you will most likely have missing values when looking at aggregations. Luckily, TM1 has a tool to check for missing feeders, which can be accessed by right-clicking on a cell and selecting "Check feeders". Any cell not being fed will show up, and you can then work on fixing it.

    How to find overfed cells?

    The opposite problem is overfeeding a system. In this case, rule-based cells that result in a zero value are still flagged with a feeder. While a small amount of overfeeding might not have an impact, large cubes with many overfed cells will result in much slower end-user performance, as the consolidation engine has to check those cells only to find that the result is zero and has no impact on the overall consolidation.

    To assess how many cells are overfed and which feeder statement is the likely culprit, you can apply the following simple trick.

    For the cube you want to analyse, create a shadow cube with the same dimensionality. In our example, we work with the standard demo model from IBM and analyse the SalesCube. The shadow cube we have created is called SalesCube - Overfeeding.

    For the SalesCube - Overfeeding, create a new rule file and add three statements to it.

    SKIPCHECK;
    [] = N: IF(
            DB('SalesCube',!actvsbud,!region,!model,!account1,!month) = 0,
            1, 0);
    FEEDERS;

    The last step in our preparation is to add one additional feeder to the initial SalesCube pointing to our SalesCube - Overfeeding.

    [] => DB('SalesCube - Overfeeding',!actvsbud,!region,!model,!account1,!month);

    Once this is completed you can open the SalesCube - Overfeeding and browse the data:

    Any cell showing up with a 1 is overfed and a candidate to be fixed.

    The idea behind this trick is to check whether a cell in SalesCube - Overfeeding is fed even though the value at the same intersection in SalesCube equals 0.
    If such a cell is fed in SalesCube - Overfeeding, the rule returns 1 and that 1 rolls up to the consolidated level. In other words, a cell whose value equals 0 in SalesCube has still sent a feeder flag to the SalesCube - Overfeeding cube, which means the cell in SalesCube is fed even though its value equals 0.

    How to fix it?

    To understand why this cell is overfed we need to drill down to the lowest level:

    We can see that Gross Margin % has a 1 in the SalesCube - Overfeeding even though the Gross Margin% is equal to 0 in SalesCube. If we have a look at the rule:

    ['Gross Margin%'] = ['Gross Margin'] \ ['Sales'] * 100;

    Instead of being fed by Sales or Gross Margin, Gross Margin% is fed by Units:

    ['Units'] => ['Gross Margin%'];

    In this scenario Gross Margin % is overfed because it is fed by Units. Even though Gross Margin % equals 0, it is still fed because Units equals 10 in Feb.

    Use feeder-less rules

    Feeders can take up a lot of memory if you work with large cubes. In this scenario, where Gross Margin % is always a calculation at both N and C levels, you can get rid of the feeder by adding Gross Margin as a child of Gross Margin %:

    Now Gross Margin % is a consolidation and will be "fed" whenever Gross Margin has a value, without having to write a feeder. Using this method to remove feeders will speed up the TM1 server startup time and reduce the size of the cube and feeder files.


    Determine the version of IBM Planning Analytics

    It is not easy to know which IBM TM1/Planning Analytics version is installed on your server; even if you know the TM1 server version number, it is difficult to tell from that number whether you are running the RTM (first release), an interim fix or a fix pack. This article lists the version numbers from TM1 9.5.2 onwards to help you find out the exact IBM TM1/Planning Analytics version installed on your server.

    How to find the TM1 server version number

    To find out what IBM TM1/Planning Analytics version you are using, you need to check the version of the TM1 server. There are two ways to find the TM1 server version number:

    1. Open cmplst.txt file

    Depending on the TM1 version, the default location of the cmplst.txt file could be:

    • C:\Program Files\ibm\cognos\tm1: TM1 9.5.2 and lower

    • C:\Program Files\ibm\cognos\tm1_64: TM1 10.1 and higher

    • C:\Program Files\ibm\cognos\pa_64: PAL 2.0

    Once you have opened the file, look for TM1SERVER_version and you will get the TM1 server version number:
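    Scanning cmplst.txt can also be scripted. The sketch below assumes the file contains a line starting with TM1SERVER_version= (the sample content here is made up):

```python
import re

# Illustrative cmplst.txt content; the real file holds many component lines
cmplst = """\
TM1WEB_version=TM1WEB-AW64-ML-RTM-11.0.00000.918-0
TM1SERVER_version=TM1SERVER-AW64-ML-RTM-11.0.00000.918-0
"""

# Extract everything after TM1SERVER_version= on its own line
match = re.search(r"^TM1SERVER_version=(.+)$", cmplst, re.MULTILINE)
if match:
    print(match.group(1))  # TM1SERVER-AW64-ML-RTM-11.0.00000.918-0
```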

    2. Go to properties of tm1s.exe

    Another way to find the TM1 server version number is to open the properties of the bin64\tm1s.exe file and look for the File version:

    Check the version number with Pulse

    A quicker way to check the TM1 version number is to open the Pulse dashboard, in the server section you will find the TM1 server version which matches the version number in the cmplst.txt:

    Once you know the version number you can check the list below to find out what version, RTM (first release) or interim fix or fix pack is installed in your environment:

    Determine the version of IBM Cognos TM1

    If your TM1 server version number is between 9.5.20000.x and 10.2.20700.x, you should check this IBM article, which lists all IBM Cognos TM1 versions from 9.5.2 to 10.2.

    Determine the version of IBM Planning Analytics

    If your TM1 server version number starts with 11, it means that you have installed one of the IBM Planning Analytics versions below:

     Planning Analytics Local 2.0

    • tm1s.exe = 11.0.00000.918

    • cmplst = 11.0.00000.918

     Planning Analytics Local 2.0.1

    • tm1s.exe =

    • cmplst = 11.0.00100.927-0

     Planning Analytics Local 2.0.1 IF1

    • tm1s.exe =

    • cmplst = 11.0.00101.931

     Planning Analytics Local 2.0.2

    • tm1s.exe =

    • cmplst = 11.0.00200.998

     Planning Analytics Local 2.0.2 IF2

    • tm1s.exe =

    • cmplst = 11.0.00202.1014

     Planning Analytics Local 2.0.2 IF4

    • tm1s.exe =

    • cmplst = 11.0.00204.1030

     Planning Analytics Local 2.0.3

    • tm1s.exe =

    • cmplst = 11.1.00000.30

     Planning Analytics Local 2.0.3 (Version number updated by IBM in Dec 2017)

    • tm1s.exe =

    • cmplst = 11.1.00004.2

     Planning Analytics Local 2.0.4

    • tm1s.exe =

    • cmplst = 11.2.00000.27

     Planning Analytics Local 2.0.5

    • tm1s.exe =

    • cmplst = 11.3.00000.27

    Planning Analytics Local 2.0.5 IF3

    • tm1s.exe =

    • cmplst = 11.3.00003.1

    For more information about all IBM Planning Analytics versions, you should check this IBM link.


    Getting started with TM1py

    TM1py is a free Python package that wraps the TM1 REST API in a simple-to-use library, making it easier to integrate systems more effectively with IBM Planning Analytics.

    Why should you use it?

    • Load FX rates from web services such as FRED into the TM1 Server.
    • Greater automation of your forecast models with Pandas for Data Analysis and Statistics.
    • Advanced integration with machine learning and forecasting algorithms with Python, scikit-learn and the TM1 Server.

    Getting Started!

    TM1py is really quick and easy to set up; just follow these steps and you will be able to run your first Python script in less than 5 minutes!

    1. Install TM1py

    The first step is to install Python and TM1py. These steps are explained on the TM1py-samples GitHub page.

    2. Enable the TM1 REST API

    TM1py uses the TM1 REST API to connect to the TM1 server, so you will need to enable the TM1 REST API on each instance you want TM1py to connect to.
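    Enabling the REST API is done in tm1s.cfg. An illustrative fragment (the port number is an example; choose any free port):

```ini
# tm1s.cfg: the TM1 REST API listens on the instance's HTTP port
HTTPPortNumber=8882
# Serve the REST API over SSL (the default; requires certificates)
UseSSL=T
```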

    3. Download TM1py samples

    TM1py includes a lot of ready-to-use samples that you can download from GitHub.

    4. Run your first Python script

    Ever wondered which of the dimensions in your TM1 instance are not used in any cube? TM1py can help answer this question with eight lines of code! Just follow these steps to find out which dimensions are not used in your TM1 application.
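    The underlying logic is a simple set difference: all dimensions minus every dimension referenced by a cube. In a real script you would fetch both lists over the REST API (for example with TM1py); here they are mocked with plain lists to show the idea:

```python
# Mocked inputs; with TM1py you would fetch these from the TM1 server
all_dimensions = ["Account", "Region", "Month", "Scenario", "OldVersion"]
cubes = {
    "SalesCube": ["Account", "Region", "Month"],
    "PlanCube": ["Account", "Month", "Scenario"],
}

# Every dimension referenced by at least one cube
used = {dim for dims in cubes.values() for dim in dims}

# Dimensions not used in any cube, in their original order
unused = [dim for dim in all_dimensions if dim not in used]
print(unused)  # ['OldVersion']
```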

    5. Learn More

    The differences between IBM Planning Analytics Workspace and Canvas

    The world has changed: user experience is now the most decisive factor in the success of any business application that is built for people, and TM1-based applications are no exception. Despite inferior modelling capability and performance, other (non-TM1) software solutions have become popular almost solely because they meet modern expectations for user experience.


    In other words, companies are choosing their budgeting and reporting solutions based on look and feel rather than on whether they get the job done, and done well.

    To address this, at the end of last year IBM released Planning Analytics Workspace (PAW). PAW is a completely new, modern, web-based self-service user interface for TM1.

    With Cubewise Code’s Canvas for TM1 and IBM’s Planning Analytics Workspace, TM1 customers thankfully now have two options to offer users to give them a modern TM1 user experience.

    But there has been some confusion about where customers should best use PAW and where best to use Canvas. Let’s dig into that.


    Canvas and Planning Analytics Workspace are both designed to build web applications natively on TM1 using the TM1 REST API. This means that both are very fast and require no other data layer (no “connector”, no flat file interface, etc.) to access TM1. They both are focused on TM1-specific functionality which means they feature the rich, high-performance write-back capabilities that underpin business planning applications.


    Self-Service vs Curated Apps

    PAW has been designed as a powerful ad-hoc self-service data discovery tool with which business users can easily build and share their own reports and dashboards. The main advantage of such drag-and-drop tools is that any user with minimal to no training can create a report or dashboard quickly. The disadvantage is that the user is inherently limited to whatever the software allows them to drag and drop and corresponding “right click” options. If you want to do something you have been able to do in Excel or have seen in another application and it is not “in the box”, the best you can do is request the new feature from IBM and hope they prioritise it.

    Canvas is not a drag-and-drop tool. Canvas was designed to finally bring the freedom and power of modern web application development to TM1 developers. The core benefit of Canvas is that for the very first time, a TM1 Developer can produce a modern web-based application and match the modern user experience “wow factor” expectations of end users.
    Some people have shied away from Canvas because they do not know HTML. But Canvas has been designed with this in mind: we have seen that any TM1 developer who can write rules and TI can easily learn enough HTML to build rich, modern web applications for TM1 with Canvas. In short, it allows TM1 developers to embed DBR and SUBNM functions (among many other familiar TM1 pieces) into their web applications in a way they already know from Excel.
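As an illustrative sketch only (the directive and attribute names below follow Canvas's Angular-directive style but should be checked against the Canvas reference, and the instance, cube and element names are made up), a Canvas page embeds SUBNM- and DBR-style components directly in HTML:

```html
<!-- An element picker bound to a page variable, similar to SUBNM in Excel -->
<tm1-ui-subnm tm1-instance="dev" tm1-dimension="Period"
              tm1-subset="All Periods" ng-model="page.period"></tm1-ui-subnm>

<!-- A single-cell retrieve/write-back, similar to a DBR formula -->
<tm1-ui-dbr tm1-instance="dev" tm1-cube="General Ledger"
            tm1-elements="Actual,{{page.period}},Revenue"></tm1-ui-dbr>
```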


    PAW and Canvas are built on fundamentally different concepts of how a report is built

    PAW is a view-based application. It allows a user to easily drag and drop a TM1 cube view onto a workspace and then display it as a range of charts. So whatever a user wants to have in the chart, they must have in the view – which may lead to design decisions regarding what you put in a cube in order to display it in PAW.

    Canvas, on the other hand, has adopted a cell-based approach to application building, similar to Excel and TM1 Perspectives. This means each cell in Canvas has its own DBR formula to retrieve data from and update TM1, which is exactly how TM1 developers are used to working with TM1. So with Canvas, you can easily combine data from two different cubes into one table or chart, just as you can in Excel and TM1 Perspectives.


    As it turns out it is not Canvas versus PAW - Canvas and PAW are complementary products. PAW is much better suited for ad-hoc TM1 reporting while Canvas is definitely the product of choice for those who want to build a sophisticated custom TM1 application for planning or reporting.

    Pulse v5.6 Released

    The best TM1 management system just got better thanks to the great feedback we have received from our 150+ customers worldwide.

    MS SQL Server can optionally be used

    To achieve better performance on large Pulse installations with many concurrent active Pulse users, we now support MS SQL Server. Importantly, this centralized database architecture is a necessary and significant step toward the cloud enablement of Pulse.

    Pause and rewind up to 10 min of your TM1 history

    With Pulse v5.6, you can now navigate through 10 minutes of your TM1 history. For example, if you receive a Pulse alert that someone is waiting, you can go back to the beginning of the event to find out what is causing the lock.

    Canvas logging

    With Pulse v5.6, you are now able to see which Canvas pages have been opened and who opened them.

    Pulse self-usage tracking

    Pulse now monitors itself: you can see who logs in and which features are used.

    Migration Packages can be refreshed/recreated

    Migration Packages can be refreshed with the latest version of the TM1 objects they contain. When you click the recreate button, Pulse creates a new migration package based on the same list of TM1 objects, but takes the current version of each object from the TM1 data folder.

    A new lighter Excel logger

    Pulse v5.6 introduces a new Excel add-in to track Excel usage. Written in .NET, this silent add-in logs Excel usage to Pulse and has no user interface.