
The Attack of Best Practices for IBM TM1 and Planning Analytics Hierarchies

There have been considerable improvements to the TM1 server in recent years, and one of the hot topics that the new Planning Analytics brings to the table is “true” Hierarchies.

Hierarchies are not enabled by default, and there is important information you should consider before turning them on. We have compiled the many aspects of hierarchies and the way they work with rules, TI processes, subsets, attributes, picklists and MDX.

It is important that you read our detailed blog, Mastering hierarchies in IBM TM1 and Planning Analytics, which will give you a better understanding of this new concept.

Watch Episode II now to see Hierarchies in action and learn how you can implement Best Practices for this all-new functionality of Planning Analytics.

 

Next episode - Wednesday, 8th of May (4pm US ET)


During this third Episode of the saga, you will discover the amazing power of TM1Web and how easy it is to create and deploy web applications for planning, analysis and/or reporting.

  • TM1 Web Beautification

  • How to build a Menu System

  • Exposing TM1web sheets in PAW

The Excel Hell Menace

In the first Episode of the epic webinar saga - MAY TM1 BE WITH YOU, we explore how you can solve some of the issues that can make a great tool like Excel an absolute hell.

The road to Excel Hell is normally paved with good intentions. Almost every firm uses Excel for financial reporting, but let’s be honest, we have all been there or seen it happen: a new Excel report is created, your boss loves it and a few “easy” suggestions are added to the report.

The report becomes more complex and hence error-prone, especially if you take into consideration that 9 in 10 spreadsheets contain errors. Then many people in the company are using the report, the “users” send updates and tweaks, and eventually macros and other linked spreadsheets are incorporated, but they don’t always work and not everyone is a VBA expert.

In 2013, JP Morgan suffered a $6 billion loss from a copy-and-paste error.

Several checks and reconciliations are also added to avoid errors. Now the report takes twice as long to produce, and every month a whole team is needed to consolidate, update, check, reconcile and distribute it. Sound familiar?

You may be ahead of the curve by already using TM1, taking the concept of Excel to a robust database level, also known as the Functional Database. Yet you could benefit even more from learning handy tips and best practices on how to effectively leverage TM1’s power through Planning Analytics for Excel (PAx). By using dynamic reports in conjunction with MDX, you can solve some of the issues that can make a great tool like Excel an absolute hell.

Tips and best practices

  • "TM1User" identifies which instance is connected.

  • Excel defined ranges allow for more dynamic formulas.

  • Organize your layout for easier maintainability.

  • Centralized parameter cubes to reduce monthly updates (e.g., date).

  • "SUBNM" for selections reduces the reports required.

  • "Format ranges" with Excel "IF" formulas allow for many visual design options.

  • MDX statements for dynamic rows without creating multiple subsets.

  • Action buttons can enhance user experience by triggering refreshing or TI scripts.
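As an illustration of the SUBNM and DBRW tips above, here is a minimal sketch; the instance name "tm1srv", cube "Sales", subset "Months" and the cell layout are our assumptions, not taken from the webinar:

```
B1:  =SUBNM("tm1srv:Period", "Months", 1)        picks a month from the "Months" subset
B5:  =DBRW("tm1srv:Sales", $B$1, "Actual", $A5)  reads a cell using the month picked in B1
```

Because B5 references B1, changing the SUBNM selection refreshes the DBRW values without needing a separate report per month.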

Debugging check list

  • TM1RPTTITLE needs to reference existing elements.

  • The absence of an error in the TM1RPTTITLE formula cell does not guarantee the formula is mistake-free.

  • Only use TM1RPTTITLE for "fixed" dimensions; other uses may still work, but performance may suffer.

  • Ensure MDX statement returns values.

  • DBRW formulas should sit to the right of the TM1RPTROW cell and need to reference an element in every dimension.

Watch Episode 1 now:

All the above and much more is explained in detail in the recorded video.

 

Next episode:

In this exciting second Episode of this webinar saga, you will learn how you can implement Best Practices for the new revolutionary functionality of Planning Analytics known as Hierarchies.

 

IBM TM1 and Planning Analytics Cheatsheet

Since the first release of TM1 in 1981 to the latest versions of IBM Planning Analytics, TM1 has gone through a long journey from a niche to a mature product used all around the world to build Planning and Analytics applications.

Using IBM TM1 and Planning Analytics out of the box is all that is required to get a fast model, but you can do so much more by exploring all its features.

The idea of this cheatsheet is to gather the most important information that TM1 Administrators, Developers and Users should be aware of to get the most out of IBM TM1 and Planning Analytics.

To download the PDF version, just click the button below (no contact information required):

Deep dive into certain topics using the information icons which will send you to online articles. This cheatsheet includes three pages, one for Administrators, one for Developers and one for Users.

Cheatsheet for TM1 Administrators

The TM1 and Planning Analytics cheatsheet for administrators focuses on installation, configuration and optimisation of your application and is organised as follows:

  • BLACK: General TM1 knowledge

  • GREEN: Configuration

  • BLUE: General IT knowledge

  • RED: Troubleshooting

  • ORANGE: Learn more

Cheatsheet for TM1 Developers

The TM1 and Planning Analytics cheatsheet for developers focuses on design and development of rules and processes and is organised as follows:

  • BLACK: General Knowledge

  • DARK BLUE: Rules

  • GREEN: Processes

  • BLUE: Advanced knowledge

  • ORANGE: Learn more

Cheatsheet for TM1 Users

The TM1 and Planning Analytics cheatsheet for users focuses on object definitions and the different user interfaces and is organised as follows:

  • BLACK: General knowledge

  • BLUE: Web user interfaces

  • GREEN: Excel user interfaces

  • RED: Troubleshooting

  • ORANGE: Learn more

If you think something should or should not be there, just email us at software@cubewise.com.

Learn more:

Continuous improvement of your TM1 and Planning Analytics system

We are seeing a rapid uptake in our Cubewise CODE capabilities as developers scramble to respond to the rapidly changing business environment that they model and operate in.

Every system can use some improvement. Let us share with you in these videos our global experience with the latest technology and best practices for IBM TM1 and Planning Analytics that will help you identify high-impact improvement opportunities.

We will share valuable and actionable information addressing all aspects of the PA/TM1 development process, including modeling, migration and the beautification of your user interfaces, all with the aim of improving your application’s user experience and ROI.

Your hosts:

 

Guido Tejeda

Senior Software Engineer at Cubewise

 

Luis Ruicon

Business Development Manager at Cubewise

 
 

Chapter 1 - Delivering efficiencies in the TM1 and Planning Analytics process

Guido shares some tips to improve your developments and some examples about how to use Bedrock for TM1:

 

Chapter 2 - Optimizing your backend system

In this second chapter, Luis goes through the different tools such as Hustle and Pulse which you can use to improve your backend system:

 

Chapter 3 - Leveraging the TM1 REST API

Everything you should know about the TM1 REST API is explained by Guido in this chapter:

 

Chapter 4 - Modernizing your TM1 applications

In this video, Luis analyzes, on a single model, all the different user interfaces available on the market for creating a planning and reporting web application for IBM TM1 and Planning Analytics:

 

Why it’s Vital to Monitor the Health of your TM1 System

As the data volumes, computational complexity and user community of your TM1 applications grow over time, effectively monitoring the health of your system becomes vital to business continuity. And by “effective”, we don’t mean just in-the-moment, but over time.

Just like at a doctor’s visit, your current weight, blood pressure and other health metrics are important, but they must also be compared to a historical baseline to be truly meaningful. Sudden increases or decreases in weight provide much more information than what your weight is right now.

With TM1, these trends and comparisons could take the following forms:

  • Is my application as responsive as it was before?

  • Does my server take longer to restart or shutdown than before?

  • Are alert events happening with increasing frequency?

  • Is memory consumption increasing, in which cubes, and at what rate?

  • Are the number and duration of user sessions increasing or decreasing over time?

  • Is the user experience better or worse than before?

  • Do processes take longer to run?

  • Are certain times of the day more problematic than others?

New in Cubewise Pulse: the System Summary Report

With the new System Summary Report introduced in the latest version of Cubewise Pulse (5.8), IBM TM1 and Planning Analytics administrators have a powerful tool to provide accurate answers to many of these questions. The System Summary Report gathers all key performance indicators such as user sessions, wait time and alerts in a concise one-pager PDF report.

Analysing the Number of Sessions vs Number of Alerts

One of the biggest system management benefits of Pulse is the ability to set up alert conditions for proactive monitoring of your TM1 applications. Pulse alerts can be defined for multiple scenarios, alert conditions and thresholds, including memory use, free disk space, user run time & wait time, TM1 crashes, error logs, message logs, rollback events and many others.

The first graph in the System Summary report displays the correlation between the number of sessions and the number of triggered alerts:

This chart allows you to examine cause-and-effect relationships such as:

  • If the user sessions decrease over time

  • If the number of alerts increases each time the number of sessions increases

Analysing Wait Time

In the second section of the report, administrators will see the Top 10 wait-time events, their duration, and a bar chart to analyse the maximum wait time by period.

Things to consider in this chart:

  • If the wait time is greater than 60 seconds during working hours

  • If the maximum wait time per period increases

Analysing Alerts

The last chart displays the distribution of alerts by type over time:

Things to monitor:

  • If the volume of alerts is increasing

  • If the distribution of alert types is changing

Automatic Bursting

To make it easy to send this report to your team at pre-defined intervals, Pulse provides a scheduler. For example, you could distribute the report on a weekly basis, covering the last seven days of data.

More information about the Pulse system summary report can be found in the Help article:

Read more:

Mastering hierarchies in IBM TM1 and Planning Analytics

With IBM TM1 10.2.2 end of support coming in September 2019, now is an ideal time to consider upgrading to IBM Planning Analytics.

Before upgrading, an important consideration is whether or not to incorporate Planning Analytics’ new Hierarchy feature into your applications. There are many aspects to this decision, so the objective of this article is to give you enough information to be able to answer this question:

Hierarchies vs Roll-ups

Let’s begin by answering this question: “What is a Hierarchy?” 

Until the arrival of Planning Analytics, what was commonly called a “hierarchy” in TM1 was simply a specific roll-up of C and N level elements in a dimension. For example, in a “Period” dimension, the same N-level elements could roll up to multiple C-level elements, with the roll-ups having names like “Full Year” and “Jun YTD”:

Cube Architecture with Hierarchies

TM1’s basic cube structure has not changed since TM1’s invention in 1984: a cube is made of two or more dimensions, a cell’s value is attached to the N-level elements in those dimensions, and each dimension can have multiple consolidation paths that roll up N and C elements. In other words, there was a direct relationship between a dimension and its elements.

Planning Analytics has introduced “real” hierarchies to TM1. Instead of a straight path from a dimension to its elements, there is now the option of inserting an intermediate level. This container object is called a “Hierarchy”, and a dimension can have as many hierarchies as you wish.

By default, the new Hierarchy feature is not turned on – it must be enabled by adding the line EnableNewHierarchyCreation=T in the tm1s.cfg file. When you turn on Hierarchies:

  • An extra level can be added between dimensions and elements in TM1’s object model

  • Dimensions are no longer a container of elements, they are a container of Hierarchies.

  • A default Hierarchy is created, which has the exact same name as the dimension itself (this is to maintain backward compatibility)

  • Each dimension can now have multiple Hierarchies, with each Hierarchy containing its own set of consolidations and including one or more of the leaf elements that are shared across Hierarchies.

The “before” and “after” of TM1’s object model looks like this:

It might look a bit more complex, but Hierarchies provide greater design flexibility and substantial performance benefits.

Greater flexibility

Hierarchies behave like “virtual dimensions”, enabling you to overcome one of the legacy limitations of TM1 – the need to rebuild cubes to accommodate new analysis dimensions.

By moving analysis roll-ups into Hierarchies, your application becomes more flexible and agile in accommodating evolving business requirements. Adding a Hierarchy requires neither recreating a cube nor modifying existing load processes or reports.

Greater performance

A major performance benefit of Hierarchies is that you can reduce the number of dimensions in an analysis cube, and fewer dimensions increases query performance. For every dimension that is removed, at minimum one level is removed from the cube index (the actual number of levels removed depends on the number of N level elements in the dimension). The net impact of a smaller index is that a query will require fewer “passes” to retrieve a cell value, resulting in greater query performance.

However, using Hierarchies in lieu of dimensions will not necessarily reduce memory consumption, as data structures still need to be created whether the alternate rollups exist within a single dimension or in separate hierarchies.

Hierarchies with Dimensions

Dimensions still exist, but they are not used in the cube directly; they are only used for establishing the dimension order.

After creating your first Hierarchy, the “Leaves” Hierarchy will be automatically added to the list of available Hierarchies. In the example below, the “Type” Hierarchy has been added to the Department dimension, and three Hierarchies (“Department”, “Leaves” and “Type”) are now available:

Hierarchies with Leaf Elements

Each Hierarchy can have its own leaf elements; when you delete a leaf element from a Hierarchy, the element is not deleted from the other hierarchies or from the Leaves Hierarchy. This means that data is still stored for this element in the underlying cubes.
If you want to delete a leaf element and delete its data in the cube (i.e. TM1’s traditional behaviour), you must delete it from the Leaves Hierarchy. This deletes the element from all hierarchies and removes all cube data referenced by the leaf element.
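The two deletion behaviours can be sketched in TurboIntegrator; the dimension, hierarchy and element names here ('Department', 'Type', 'Dept 101') are our assumptions:

```
# Removes 'Dept 101' from the 'Type' hierarchy only; cube data is kept:
HierarchyElementDelete('Department', 'Type', 'Dept 101');

# Removes it from the Leaves hierarchy: the element disappears from ALL
# hierarchies and its cube data is removed (traditional TM1 behaviour):
HierarchyElementDelete('Department', 'Leaves', 'Dept 101');
```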

Hierarchies with Processes

To work with Hierarchies in TurboIntegrator (TI), Planning Analytics has introduced a set of new TurboIntegrator functions. IBM has ensured these new functions are very similar to the TI functions for dimensions, and there is usually a 1-to-1 corresponding function. For example, you would use HierarchyExists (check if a Hierarchy exists) instead of DimensionExists (check if a dimension exists).

In the example below, the TI code on the left creates a dimension and on the right it creates a Hierarchy – you can see they are very similar:
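As a hedged sketch of that comparison (dimension, hierarchy and element names are our assumptions):

```
# Classic dimension build:
IF(DimensionExists('Department') = 0);
    DimensionCreate('Department');
ENDIF;
DimensionElementInsert('Department', '', 'Admin', 'N');

# Equivalent hierarchy build in Planning Analytics:
IF(HierarchyExists('Department', 'Type') = 0);
    HierarchyCreate('Department', 'Type');
ENDIF;
HierarchyElementInsert('Department', 'Type', '', 'Admin', 'N');
```

The hierarchy functions simply take the hierarchy name as an extra parameter after the dimension name.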

Hierarchies with Rules

Working with Hierarchies will make your cube rules a bit more complex, because to reference an element in a Hierarchy, you must use the new “Hierarchy-qualified” syntax, as follows:

  • DimensionName:HierarchyName:ElementName

Note: You would need the DimensionName only if the ElementName is ambiguous.

If you omit the Hierarchy name, the “default” Hierarchy is used, which has the same name as the dimension.

In the example below, you can see two DB calls, the first one without Hierarchy and the second one referencing departments only in the “Type” Hierarchy:
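A hedged sketch of two such DB calls, using the qualified syntax above; the cube, dimension and element names are our assumptions, and the two statements are shown as alternatives for illustration:

```
# Unqualified – 'Admin' is resolved in the default 'Department' hierarchy:
['Admin Cost'] = N: DB('Costs', !Year, 'Admin', !Account);

# Hierarchy-qualified – 'Admin' is resolved in the 'Type' hierarchy:
['Admin Cost'] = N: DB('Costs', !Year, 'Department':'Type':'Admin', !Account);
```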

Everything you need to know about working with hierarchies in rules can be found here:

Hierarchies with Attributes

In Planning Analytics, Hierarchies are stored as separate .dim files within the dimension’s “}hiers” folder in the TM1 Server’s data directory.

Although you can store attribute values for the same named element on different hierarchies, everything is stored in a single }ElementAttributes_ cube. This makes sense since that’s exactly how it works for all the data cubes as well.

Attr functions still work

In processes and rules, "Attr" functions such as AttrS and AttrPutS still work for hierarchies; you just need to use DimensionName:HierarchyName instead of simply the dimension name.

For each "Attr" function, IBM has introduced a new hierarchy-aware function starting with "ElementAttr", for instance:

  • Attrs -> ElementAttrs

  • AttrPuts -> ElementAttrPuts

  • AttrInsert -> ElementAttrInsert

These new functions behave the same as the old ones, with some slight changes, as you can see below:

AttrInsert vs ElementAttrInsert

To create a new attribute on a Hierarchy only, you should use the new function ElementAttrInsert instead of AttrInsert so the syntax will look like this:

  • ElementAttrInsert(cDimSrc, cHierarchy, '', cAlias, 'A');

Instead of 

  • AttrInsert(cDimSrc | ':' | cHierarchy, '', cAlias, 'A');

Using ElementAttrInsert avoids duplicate aliases on a consolidation that appears in two hierarchies of the same dimension.

Hierarchies with Subsets

With hierarchies, Subsets are attached not to a dimension but to a Hierarchy. If you are using IBM’s PAx or Cubewise Arc, you will see the Subsets attached to Hierarchies.

If you do not specify a Hierarchy when creating a subset, it will be added to all existing Hierarchies in the dimension.

Hierarchies with Picklists

The syntax of Picklists is also impacted by Hierarchies.

As you might expect by now, the syntax for subsets must also be “hierarchy-qualified”, so instead of SUBSET:Dimname:Subname you must specify SUBSET:Dimname\:Hierarchy:Subname

i.e. replace subset:Entity:FE – Division with subset:Entity\:Entity:FE – Division

Hierarchies with MDX

To reference an element in a Hierarchy in MDX, you must specify the Hierarchy name in your query, for example, instead of [Time].[2018] you will need to use [Time].[Time].[2018] to specify element “2018” from the default Hierarchy (remember, the default Hierarchy has the exact same name as its containing dimension).
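For instance, a hedged MDX sketch; the 'Fiscal' hierarchy and its member names are our assumptions:

```
-- Default hierarchy: same name as the dimension
{ [Time].[Time].[2018] }

-- An alternate 'Fiscal' hierarchy on the same dimension
{ [Time].[Fiscal].[FY2018] }
```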

Named levels and default members

One of the advantages of using the cube viewer in PAx or Arc versus Perspectives is that you will not need to specify a selection for all dimensions to retrieve cube values. In fact, selecting even a single dimension in a cube view will result in some cube values being retrieved.

This is because for all the other dimensions not referenced in the cube view, TM1 will use the dimension’s default member. To define default members, you will need to specify the defaultMember value in the }HierarchyProperties cube:

If the defaultMember doesn’t exist in the }HierarchyProperties cube, the first element (by index order) will be used.

It should be noted that in this cube view, hierarchies are treated as “virtual” dimensions (Time, Time:Fiscal Year, Time Half Year…). If you use levels in PAx or Cognos Analytics, this is where you will define them. To apply the changes, you must run the RefreshMdxHierarchy process function.
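As a sketch of those two steps in TurboIntegrator, assuming the }HierarchyProperties cube is keyed by the dimension entry and the property, and using our own 'Time'/'Total Year' names:

```
# Set the default member for the Time dimension's default hierarchy
CellPutS('Total Year', '}HierarchyProperties', 'Time', 'defaultMember');

# Push the change to the MDX engine
RefreshMdxHierarchy('Time');
```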

Hierarchies with Reporting

Currently, with Planning Analytics v2.0.5, the main weakness of Hierarchies is on the reporting side. Unfortunately, the DBR function in Active Forms does not support hierarchies. If you are a heavy user of Active Forms, it might be challenging to reproduce them using hierarchies.

Alternatives to PAX Active Forms are:

  • Cognos Analytics – this is a good option for read-only reporting, as Cognos Analytics fully supports MDX and the new Hierarchy structure.

  • If you need both read and write capability (i.e. planning + reporting), Cubewise Canvas is a great solution, as the DBR in Canvas supports hierarchies.

Should I implement hierarchies?

As you might have guessed, the answer is “it depends”.

By using Hierarchies, you will gain in performance and flexibility, but you possibly lose some Excel reporting capabilities if you are a heavy user of Active Forms.

If you think you can reproduce your Active Forms with PAX’s Exploration Mode without using DBRs, or you do not need Excel-based reports at all, then it is a definite “yes” to implement Hierarchies.

Re-architecting your applications to take advantage of the new Hierarchy capability will require some time and effort, but it is an investment that will pay off in the long run, as your solutions become nearly “future-proof” in accommodating evolving business requirements.

READ MORE:

Mastering MTQ with TM1 and Planning Analytics

Multi-Threaded Queries (MTQ) allow IBM TM1 and Planning Analytics to automatically load-balance a single query across multiple CPU cores. In other words, TM1 is fast, and MTQ makes it even faster. MTQ has been around for a number of years, but there are still some frequently asked questions that need a clear answer.

 

What is the recommended value for the MTQ setting?

IBM's recommendation is to set MTQ to the maximum number of available processor cores. In our experience, it is better NOT to give ALL processor cores to MTQ but to leave one or two cores for the server so it has enough room to operate (MTQ=-2 or MTQ=-3, so the operating system and other server activities can have some processing time). MTQ=-n means that TM1 will use (total number of available cores) - n + 1. For more information about configuring MTQ, you should read this IBM technote:

The MTQ setting is per TM1 instance. If you are running more than one TM1 instance on the same server, then the sum of the MTQ values across all instances should not be greater than the number of available cores on the server. For example, with two TM1 instances on the same 32-CPU server, one MTQ value could be 20 and the other 10, keeping the sum (30) below the number of cores.
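In tm1s.cfg, the two sizing approaches above might look like this (a sketch, not a recommendation for your hardware):

```
# Relative: leave two cores free for the OS and other server activities
MTQ=-3

# Or absolute, when several instances share a server:
# e.g. on a 32-core box, instance A gets MTQ=20 and instance B gets MTQ=10
```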

 

How to find the optimal setting?

Increasing the number of cores on your server to increase the MTQ value will not necessarily make your TM1 application faster. MTQ works by splitting a query into multiple chunks and then combining the results from each chunk into a final result.

There is overhead involved in splitting the query and combining the results. Depending on the size of the data, there will be a point where the splitting takes more time than it saves compared to using larger chunks.

Overall, somewhere between 32 and 64 cores, a query hits this tipping point. If your TM1 and Planning Analytics application contains only small cubes, the optimal number of cores could be even less; for applications with bigger cubes, it may be more than 64.

Source: IBM

The only way you can really know the optimal setup is by testing.

 

When is MTQ triggered?

MTQ does not trigger all the time. It will only trigger in one of the following scenarios:

  • Queries exceeding 10,000 cell visits.
  • Creation of a TM1 ‘stargate’ view.
  • Views containing the results of rule-derived calculations.

To enable or disable MTQ processing when calculating a view used as a TM1 process data source, use the MTQQuery parameter in tm1s.cfg. You can override MTQQuery for a specific process by using one of the following functions: EnableMTQViewConstruct() or DisableMTQViewConstruct().
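A hedged sketch of where those calls would sit in a process prolog (the surrounding view-building code is omitted):

```
# Override the global MTQQuery setting for this process only:
EnableMTQViewConstruct();

# ... create and assign the data-source view here ...

# Switch multi-threaded view construction back off if required:
DisableMTQViewConstruct();
```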

 

Does MTQ manage the order of queries?

As per IBM, “MTQ value does not specify the total # of CPUs that TM1 can leverage for multiple queries. MTQ defines the # of CPU cores that TM1 may leverage for individual queries”.

In other words, MTQ will be able to split one query into multiple chunks, but it does not manage the sequence of queries. If your queries are queuing, it has nothing to do with MTQ.

 

Differences between MTQ and Parallel Processing

Parallel processing and MTQ are two separate things; the Q in MTQ stands for query. MTQ is only used when you are reading data from a cube; if you are running a straight load, writing data into the cube, MTQ has no influence.

Running a TM1 process might not always trigger MTQ. For example, if a process uses an existing view, MTQ might not be triggered, depending on the view. The process would need to construct the view to trigger MTQ.

 

Fine-tuning MTQ parameters

To fine-tune how MTQ behaves, there are other tm1s.cfg parameters that you can use. The first four below are documented by IBM and the rest are undocumented:

Documented

  • MTQ.CTreeRedundancyReducer
  • MTQ.EnableParallelFacetQuery
  • MTQ.OperationProgressCheckSkipLoopSize
  • MTQ.SingleCellConsolidation

Undocumented settings to be used under direction from IBM

  • MTQ.ForceParallelTxnOnMainWorkUnit
  • MTQ.ImmediateCheckForSplit
  • MTQ.UseThreadPrivateCacheCopyForOperationThreads
  • MTQ.MultithreadStargateCreationUsesMerge
  • MTQ.CTreeWorkUnitMerge
  • MTQ.OperationThreadWakeUpTime
 

Turn on MTQ for feeders (MTFeeders)

By default, MTQ does not trigger when feeders are processed. To enable MTQ for feeders, IBM introduced in Planning Analytics a new tm1s.cfg parameter, MTFeeders. By turning on MTFeeders, MTQ will be triggered when:

  • CubeProcessFeeders() is triggered from a TM1 process.
  • A feeder statement is updated in the rules.
  • Feeders are constructed at startup.

MTFeeders will provide you with significant improvements, but you need to be aware that it does not support conditional feeders. If you are using conditional feeders where the condition clause contains a fed value, you have to turn MTFeeders off.

To turn on MTFeeders during server start-up, you will need to add MTFeeders.AtStartup=T.
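In tm1s.cfg, the two switches look like this (keeping in mind the conditional-feeder caveat above):

```
MTFeeders=T
MTFeeders.AtStartup=T
```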

 

Remove Undo and Redo button on TM1Web

In TM1 Web, or in a Cube view from Architect/Perspectives, after inputting a value a user can undo or redo their input.

It is great to be able to undo a data change but, on the other hand, a user clicking the redo or undo button can cause the TM1 server to hang for several minutes.

That is why many people have asked whether it is possible to remove the Undo and Redo functionality. Even though there is no official setting that can turn this feature on or off, a workaround exists (only for TM1Web).

The workaround consists of a manual change to the TM1 Web CSS file to remove the undo and redo icons. This workaround is not supported by IBM. It is a hack, and therefore after any software upgrade you will need to do it again.

This trick has been tested on a Websheet, the Cube Viewer and the TM1Web URL API, and all work as expected. The only catch is that it requires users to clear their browser cache if they have previously opened TM1Web.

This workaround works with both TM1 10.2.2 and Planning Analytics; however, as you can see below, the steps are slightly different.

Remove Undo and Redo button with TM1 10.2.2 FPx

In TM1 Web 10.2.2, you can find the Undo and Redo icon just after the paste icon:

To remove these icons, follow these steps:

1. Go to the following directory {tm1 directory}\webapps\tm1web\css\

2. Open the file standaloner.css as administrator

3. Look for .undoIcon and replace the following code

.undoIcon {background-image: url(../images/toolbar/action_undo.gif); width: 16px; height: 16px;}

with

.undoIcon {display:none;background-image: url(../images/toolbar/action_undo.gif); width: 16px; height: 16px;}

We are adding the display:none CSS property to hide the Undo icon. In standaloner.css, .undoIcon appears twice, so you will have to repeat this step a second time.

4. Do the same for the .redoIcon, replace the following code:

.redoIcon {background-image: url(../images/toolbar/action_redo.gif); width: 16px; height: 16px;}

with:

.redoIcon {display:none;background-image: url(../images/toolbar/action_redo.gif); width: 16px; height: 16px;}

5. Save the standaloner.css file.
6. You don't need to restart TM1 Web, just open TM1 Web from any browser, make sure browser cache is cleared. Now you should see that the icons disappeared:

Remove the Undo and Redo button with Planning Analytics

In the TM1 Web version of Planning Analytics, you can find the Undo and Redo icon just after the paste icon:

To remove these icons, follow these steps:

1. Go to the following directory {tm1 directory}\webapps\tm1web\scripts\tm1web\themes\flat

2. Open the file flat.css as administrator

3. Look for .tm1webUndoIcon and replace the following code

.tm1webUndoIcon {background-image: url("share/toolbar/images/menu_undo.svg");}

with

.tm1webUndoIcon {display:none;background-image: url("share/toolbar/images/menu_undo.svg");}

4. Do the same for the .tm1webRedoIcon, replace the following code:

.tm1webRedoIcon {background-image: url("share/toolbar/images/menu_redo.svg");}

with

.tm1webRedoIcon {display:none;background-image: url("share/toolbar/images/menu_redo.svg");}

5. Save the flat.css file
6. You don't need to restart TM1 Web, just open TM1 Web from any browser, make sure browser cache is cleared. Now you should see that the icons disappeared:

Resolving Circular Reference Calculation

Have you already been stuck with a circular reference in TM1/Planning Analytics?

One of the main reasons why TM1/Planning Analytics has been so successful over the years is its calculation engine. TM1 can resolve very complex calculations over millions of cells in an instant. However, one weakness is that it needs a little bit of help to resolve an equation with a circular reference. A circular reference occurs when a formula in a cell directly or indirectly refers to its own cell:


In the equation above, to calculate the Dealer Margin value, the equation needs the value of the Dealer Margin:

Even though the equation is correct, you will have to resolve this circular reference first before being able to calculate the Dealer Margin value in TM1.

This article will explain how to resolve this circular reference using a bit of mathematics:

Circular Reference in TM1/Planning Analytics

TM1 does not do circular references; the following formulae in a TM1 rule will result in a #N/A value:

['Dealer Margin'] = (['MSRP']-['VAT']-['Consumption Tax']) * ['DM%'];
['Consumption Tax'] = ['W/S Price'] * ['Consumption Tax %'];
['W/S Price'] = ['MSRP'] - ['Dealer Margin'] - ['Panda Fund'] ;

#N/A in TM1 could mean either a division by zero or a circular reference in the equation. To solve the circular reference, we will have to manipulate the equations.

Resolving Circular Reference

Our objective is to transform the circular reference system into a linear system:

Okay so let's do it!

Starting Point

We have a system of three equations with a circular reference:

How to resolve this system

  1. Simplify the system by removing the parentheses so it is easier to manipulate the equations.
  2. Break the circular reference: gather the 3 equations into one main equation.

Simplify the system

To simplify the system, we first need to get rid of the parentheses in the Dealer Margin equation:

Our new system is now as below:

Break the circular reference

Now we need to choose one equation and fold the other two equations into this new main equation. You can choose any of the three equations to be the main one; in this example we choose the W/S Price equation:

To break the circular reference, we first substitute the Dealer Margin definition into the W/S Price equation, and then do the same for the Consumption Tax definition inside the Dealer Margin:

After a few steps, the W/S Price equation will no longer be linked to the Dealer Margin value.

Step 1: Substitute the Dealer Margin definition into the W/S Price formula

Step 2: Substitute the Consumption Tax definition into W/S Price

It's almost done, now we just need to simplify this main equation.

Step 3: Move W/S Price * Consumption Tax % * DM% to the left side

Step 4: Factor the W/S Price

Step 5: Divide both sides by (1 - Consumption Tax % * DM%)

That is it! Our equation is now:

If we now update our TM1 rule:

['Dealer Margin'] = (['MSRP']-['VAT']-['Consumption Tax']) * ['DM%'];
['Consumption Tax'] = ['W/S Price'] * ['Consumption Tax %'];
['W/S Price'] = (['MSRP'] - ['MSRP']*['DM%'] + ['VAT']*['DM%'] - ['Panda Fund']) / (1 - ['Consumption Tax %']*['DM%']) ;

Refresh the cube and the #N/A will have disappeared:
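The algebra above can be sanity-checked numerically. A minimal Python sketch (the input values are made up for illustration): compute W/S Price with the derived linear formula, then verify that the original circular definition still holds.

```python
# Hypothetical input values, for illustration only
msrp = 1000.0
vat = 100.0
panda_fund = 50.0
dm_pct = 0.10   # DM%
ct_pct = 0.05   # Consumption Tax %

# Derived linear formula for W/S Price (no circular reference)
ws_price = (msrp - msrp * dm_pct + vat * dm_pct - panda_fund) / (1 - ct_pct * dm_pct)

# Recompute the other measures from their original definitions
consumption_tax = ws_price * ct_pct
dealer_margin = (msrp - vat - consumption_tax) * dm_pct

# The original circular equation W/S Price = MSRP - Dealer Margin - Panda Fund holds
assert abs(ws_price - (msrp - dealer_margin - panda_fund)) < 1e-9
```

If the assertion passes, the rewritten rule returns exactly the values the circular system describes.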

Remember this when kids say, “I’ll never use this math stuff in real life”…

READ MORE:

The TM1 REST API Collection for Postman

The TM1 REST API is a way of accessing data and almost everything else in TM1. With the TM1 REST API you can do things that normal TM1 clients cannot do, such as:

  • Get all cubes that share a particular dimension
  • Get all processes that have an ODBC Datasource
  • Get the last 10 message log entries that were referring to a certain process

Tackling the TM1 REST API by yourself can be challenging. To help you get started, we have gathered all the main TM1 REST API queries into a ready-to-use Postman Collection. All you need to do is download Postman and then follow these steps to set up the TM1 REST API Collection for Postman.

If you are not familiar with Postman, you should read the following article which will explain how to install Postman and how to run your first TM1 REST API Query:

Download the TM1 REST API Collection

A Postman collection lets you group individual requests together. To download the TM1 REST API Collection just click on the following button:

Once downloaded you should see two files:

  • Canvas Sample.postman_environment.json: Contains information about the TM1 instance you want to query.
  • The TM1 REST API.postman_collection.json: Contains all TM1 REST API queries.

Import the TM1 REST API Collection

To import a Collection in Postman, just click the import button in the top left corner and then pick the The TM1 REST API.postman_collection.json file:

Once imported, you can click on the Collections tab where you should be able to see the TM1 REST API folders.

Inside this folder, the queries are split into 5 sub-folders:

  • Cubes: Get all cubes, Execute a view...
  • Dimensions: Get all dimensions, create/delete dimensions or elements...
  • Processes: Execute or Update processes...
  • Chores: Execute or Update chores...
  • Administration: Get configuration, sessions, threads...

If you click on the first query Cubes Get, you will see that the URL uses parameters such as {{protocol}} or {{serverName}}:

Instead of hard coding the protocol, server name and httpPortNumber, we use variables defined in a Postman Environment. If the variables are red, it means that they are missing from the Environment variables list. In this example there is no Environment set up; in the top right you should see "No Environment":

Create a new Postman Environment

An environment in Postman enables you to save variables that you can then use in the URL. You can choose to create the environment manually or just upload the Canvas Sample.postman_environment file that you can find in the same folder you just downloaded. To import this file, go to Manage Environment:

Click the Import button and then select Canvas Sample.postman_environment.json file:

Once imported you should be able to see the new environment Canvas Sample:

Click on the environment to see all the variables:

If you now select your environment from the dropdown list in the top right, the variables in the URL should turn orange. Orange means that Postman has found the variables in the selected environment (if a variable is missing, it will be red):

Manage authentication

In the Authorization tab you can notice that it is set to Inherit auth from parent. Instead of defining the credentials for each query, the credentials are stored in one location, the parent folder. To update the credentials, click the edit button of the collection:

Set the proper credentials in the Authorization tab. In this example we are using basic TM1 Authentication (mode 1).

More information about how to set up Authorization with CAM Security can be found in this article:

You should now be able to run the query by clicking the Send button. If you get the following error, it might be due to SSL:

To disable SSL certificate verification, go to File > Settings and turn off the SSL certificate verification:

If you click send, you should now be able to see the list of cubes:

If it still does not work, you should check first your environment variables and then check if the TM1 REST API is enabled for your TM1 instance.

Explore the TM1 REST API Collection

You are all set! You can now run all queries. Do not forget to update the Environment variables to match your TM1 instance and the TM1 objects you want to query, such as cubes, processes and chores.

What to do next?

If you are interested in building web-based TM1 planning and reporting applications, you should have a look at Canvas, which is a web development framework.

If you want to integrate systems with your TM1/Planning Analytics application, you should have a look at TM1py which is a Python package that wraps the TM1 REST API in a simple to use library.

READ MORE:


Mastering the TM1 REST API with Postman

Do you want to do more with TM1? In TM1 10.2, IBM introduced the TM1 REST API, which now enables you to do pretty much anything you want with your IBM TM1/Planning Analytics application.

In this post you will find everything you need to know to run your first TM1 REST API query and understand how to read the data.

The TM1 REST API for Dummies

What is the TM1 REST API?

The TM1 REST API is a way of accessing data and almost everything else in TM1. Rather than being a proprietary API like the old TM1 interfaces, it is now based on web standards, making it accessible to a wide range of developers.

Why use the TM1 REST API?

The TM1 REST API is fast and there are no external web servers or components you need to install. With the TM1 REST API you can do things that the traditional TM1 clients cannot do, such as getting all cubes that share a specific dimension, or executing MDX queries to get cube data, and much more...

TM1 REST API prerequisites

Since its first introduction, IBM has been continuously improving it with every new release. We recommend using the TM1 REST API with TM1 10.2.2 FP5 as a minimum.

How to run your first TM1 REST API query?

A TM1 REST API query is a URL which looks like this:

  • https://localhost:8882/api/v1/Dimensions

It always has the same components:

  • protocol://servername:port/api/v1/resource
    • protocol: either http or https, depending on whether SSL is set up.
    • servername: the server where the TM1 instance is located.
    • port: the httpPortNumber parameter value in the tm1s.cfg.
    • resource: the resource you want to retrieve, e.g. Dimensions to retrieve all dimensions or Cubes to retrieve all cubes.

Let's have a look at an easy example. To get the list for all dimensions you can use the following URL in your browser:

  • https://localhost:8882/api/v1/Dimensions

Before running this query, you should make sure that the REST API is enabled on your TM1 instance. The data will be returned in JSON (JavaScript Object Notation) format.
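The same URL can also be assembled programmatically from the components listed above. A minimal Python sketch (the server name and port are the example values used in this article; adjust them to your tm1s.cfg):

```python
# Build the query URL from its components
protocol = "https"        # http if SSL is not set up
servername = "localhost"  # server hosting the TM1 instance
port = 8882               # httpPortNumber in tm1s.cfg
resource = "Dimensions"   # e.g. Dimensions, Cubes, Processes

url = f"{protocol}://{servername}:{port}/api/v1/{resource}"
```

Opening this URL in a browser (or any HTTP client) returns the list of dimensions.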

How to read a JSON format?

As you can see above, in the browser the JSON format is not easily readable, but do not worry, there are lots of online tools which can help you format it. For example, if you copy the content from the browser and paste it into a JSON viewer, you will see the data structure in the viewer tab:
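The structure is easier to see on a small sample. A hedged Python sketch using a simplified response shape (real responses are OData documents with an @odata.context field and more properties per object):

```python
import json

# Simplified illustration of the shape of a /Dimensions response;
# a real response contains more fields per dimension
sample_response = '{"value": [{"Name": "Region"}, {"Name": "Month"}]}'

# The interesting data sits in the "value" array
names = [d["Name"] for d in json.loads(sample_response)["value"]]
```

Each entry of the "value" array is one TM1 object; here we simply collect the dimension names.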

Instead of doing these steps manually (running the query and then viewing the data in a JSON viewer), you can use a tool which does this for you: Postman.

Postman makes the TM1 REST API easy!

Postman is a modern HTTP Client with a fancy interface. It makes interaction with TM1 through the TM1 REST API easier compared to doing it through Chrome, CURL or a programming language.

Download Postman

Postman has a free app that you can download here: getpostman.com. Once downloaded, just run the exe file. The installation should take less than a minute.

Postman will start and ask you to sign in. You can choose to create an account or to click on "Take me straight to the app". The advantage of signing up is that Postman will save your work in its cloud and you will be able to retrieve it on another laptop after you have signed in.

Run your first TM1 REST API with Postman

To run a query in Postman just copy the same query we used above and paste it in the text input:

  • https://localhost:8882/api/v1/Dimensions

    In this example the TM1 instance uses basic TM1 Security with the user admin and no password. In the Authorization tab, we select Basic Auth and then input the username and password. Click the Send button to run the query:

    After clicking the Send button, if you get the error above you might have to turn off SSL. To do that, go to File then Settings and un-check Postman's SSL certificate verification:

    Then click the Send button. Once set up, you should be able to see the list of dimensions in the Body section:

    If you want to get the list of cubes instead of the list of dimensions you can replace Dimensions with Cubes:

    • https://localhost:8882/api/v1/Cubes

    Run TM1 REST API query with CAM Security

    To run a TM1 REST API query on a TM1 instance using CAM Security you will have to change the Authorization to No Auth.

    First, you need to Base64-encode your CAM credentials (note that Base64 is an encoding, not encryption). To do that you can use sites such as base64encode.org. Click on the Encode tab, type user:password:AD and then click Encode; it is going to encode the string:

    If you are not sure about the AD, you can log in to Architect and check the user. In this example, after logging in to Architect, the user is AD\user.

    In Postman, you will then need to set the Authorization type to No Auth:

    In the Headers tab, add a new key Authorization whose value is CAMNamespace followed by the Base64-encoded credentials:
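Building that header value by hand is error-prone, so it can help to script it. A small Python sketch (user, password and namespace are placeholders) that produces the CAMNamespace Authorization header described above:

```python
import base64

# Placeholder credentials: CAM user, password and namespace (e.g. AD)
user, password, namespace = "user", "password", "AD"

# Base64-encode "user:password:namespace", as done on base64encode.org
token = base64.b64encode(f"{user}:{password}:{namespace}".encode()).decode()

# Header to add in Postman: key "Authorization",
# value "CAMNamespace " followed by the encoded string
auth_header = {"Authorization": f"CAMNamespace {token}"}
```

Paste the resulting value into the Authorization key of the Headers tab.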

    The TM1 REST API Collection for Postman

    That is it! You are now ready to dig into the TM1 REST API. To help you get started, we have gathered the most important TM1 REST API queries into a Postman collection; you can download it here:

    What to do next?

    If you are interested in building web-based TM1 planning and reporting applications, you should have a look at Canvas, which is a web development framework.

    If you want to integrate systems with your TM1/Planning Analytics application, you should have a look at TM1py which is a Python package that wraps the TM1 REST API in a simple-to-use library.

    READ MORE:


    How Cubewise Code will shape the future of IBM TM1/Planning Analytics in 2018

    2018 v2.png

    A lot happened in the TM1/Planning Analytics world in 2017. Canvas has been endorsed by many customers as their new way to build modern web planning and reporting applications. TM1py brought the TM1 and Python communities together for the first time, in order to find new ways to integrate external systems with your TM1 application.

    In 2018, we continue mastering the TM1 REST API by introducing a brand new product:

    Something that all TM1 developers have been asking for! A new way to build your TM1 models which takes advantage of all the new features of TM1 11 / Planning Analytics. More information to come on this blog or you can contact your local Cubewise office.

    Pulse goes Big Data with Kibana and Elasticsearch

    Building reports and analysing the core of your TM1 application, such as your TM1 user sessions, TM1 process errors and TM1 process/chore runtimes, will become even easier now with Kibana and Elasticsearch.

    Kibana is probably the most exciting new feature of Pulse since migration was added. With Pulse v5.7, Pulse can now send data to Elasticsearch, one of the best and most popular Big Data stores. Kibana provides dashboarding / reporting on top of the Pulse data stored in Elasticsearch enabling you to develop your own metrics and share them with your TM1 community.

    Canvas Cube Viewer and Subset Editor

    Canvas will continue revolutionising the way TM1 planning and dashboarding applications are built. Canvas proved in 2017 that it is a mature, scalable and stable solution for many customers. In 2018, we will make Canvas even better by introducing a lot of exciting new features such as the brand new Cube Viewer, the Subset Editor and new samples.

    A new version of Bedrock and TM1py

    In 2018, there will be a new version of Bedrock (v4) which will be designed for IBM Planning Analytics. This will support hierarchies and all the new functions introduced with Planning Analytics Local. We will continue improving TM1py as well, with a lot of new features and articles to inform the TM1 Community about what you can do with Python.

    Exciting IBM TM1/Planning Analytics conferences coming close to you

    This year TM1 and Planning Analytics conferences will be held in four locations:

    • London in April
    • Sydney and Melbourne in August
    • Las Vegas in September

    We will also be at the Think conference in March, so if you are in Las Vegas drop by and say hello.

    Read more:


    How to find over feeding in your TM1 model

    Feeders are a crucial part of IBM TM1/Planning Analytics, giving us ad-hoc rule calculations without loss of performance or requiring pre-calculations of results. However, getting feeders right takes a good understanding of the model and how values influence a calculation.

    How to check if a value is fed?

    The first hurdle when working with feeders is making sure that every calculation that results in a value is fed. This is important because, in TM1, only fed cells roll up into a consolidation. If a system is underfed, you most likely have missing values when looking at aggregations. Luckily, TM1 has a tool to check for missing feeders, which can be accessed by right-clicking on a cell and selecting “Check feeders”. Any cell not being fed will show up and you can then work on fixing it.

    How to find overfed cells?

    The opposite problem is overfeeding a system: rule-based cells that result in a zero value are flagged with a feeder. While a small amount of overfeeding might not have an impact, large cubes with a lot of overfed cells will result in much slower end-user performance, as the consolidation engine has to check the cells only to find out that the result is zero and has no impact on the overall consolidation.

    In order to assess how many cells are overfed and which feeder statement is the likely candidate, you can apply the following simple trick.

    For the cube you want to analyse, create a shadow cube with the same dimensionality. In our example, we work with the standard demo model from IBM and analyse the SalesCube. The shadow cube we have created is called SalesCube - Overfeeding.

    For the SalesCube - Overfeeding, create a new rule file and add three statements to it.

    SKIPCHECK;
    
    # Write a 1 into every cell whose counterpart in SalesCube equals 0;
    # only fed cells will actually roll up in the consolidation
    [] = N: IF(
            DB('SalesCube',!actvsbud,!region,!model,!account1,!month) = 0
            ,1
            ,0);
    
    FEEDERS;
    

    The last step in our preparation is to add one additional feeder to the initial SalesCube pointing to our SalesCube - Overfeeding.

    [] => DB('SalesCube - Overfeeding',!actvsbud,!region,!model,!account1,!month);

    Once this is completed you can open the SalesCube - Overfeeding and browse the data:

    Any cell showing up with a 1 is overfed and a candidate to be fixed.

    The idea behind this trick is to check whether a cell in SalesCube - Overfeeding is fed even though the value at the same intersection in SalesCube equals 0.
    If the cell is fed in SalesCube - Overfeeding, the value at consolidation level will equal 1. In this case, a cell whose value equals 0 in SalesCube sends a feeder flag to a cell in the SalesCube - Overfeeding cube, meaning that the cell in SalesCube is fed even though its value equals 0.

    How to fix it?

    To understand why this cell is overfed we need to drill down to the lowest level:

    We can see that Gross Margin % has a 1 in SalesCube - Overfeeding even though Gross Margin % is equal to 0 in SalesCube. If we have a look at the rule:

    ['Gross Margin%'] = ['Gross Margin'] \ ['Sales'] * 100;

    Instead of being fed by Sales or Gross Margin, Gross Margin% is fed by Units:

    ['Units'] => ['Gross Margin%'];

    In this scenario Gross Margin % is overfed because it is fed by Units. Even though Gross Margin % equals 0, it is still fed because Units equals 10 in Feb.

    Use feeder-less rules

    Feeders can take up a lot of memory if you work with large cubes. In this scenario, where Gross Margin % is always a calculation at N and C levels, you can get rid of the feeder by adding Gross Margin as a child of Gross Margin %:

    Now Gross Margin % is a consolidation and will be "fed" when Gross Margin has a value, without having to write a feeder. Using this method to remove feeders will speed up the TM1 server startup time and reduce the size of the cube and feeder files.

    READ MORE:

    Determine the version of IBM Planning Analytics

    It is not easy to know which IBM TM1/Planning Analytics version is installed on your server. Even if you know the TM1 server version number, it is difficult to tell from it whether you are using the RTM (first release), an interim fix or a fix pack. This article lists all version numbers since TM1 9.5.2 to help you find out the exact IBM TM1/Planning Analytics version installed on your server.

    How to find the TM1 server version number

    To find out what IBM TM1/Planning Analytics version you are using, you need to check the version of the TM1 server. There are two ways to find the TM1 server version number:

    1. Open cmplst.txt file

    Depending on the TM1 version, the default location of the cmplst.txt file could be:

    • C:\Program Files\ibm\cognos\tm1: TM1 9.5.2 and lower

    • C:\Program Files\ibm\cognos\tm1_64: TM1 10.1 and higher versions

    • C:\Program Files\ibm\cognos\pa_64: PAL 2.0

    Once you have opened the file, look for TM1SERVER_version and you will get the TM1 server version number:
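If you need to check many servers, this lookup can be scripted. A hedged Python sketch, assuming cmplst.txt contains key=value lines (the sample content below is illustrative, not copied from a real install):

```python
# Illustrative cmplst.txt content; a real file contains many more
# key=value entries and longer version strings
sample_cmplst = """OTHER_component=1.2.3
TM1SERVER_version=11.0.00000.918
"""

def tm1_server_version(text):
    """Return the value of the TM1SERVER_version entry, or None if absent."""
    for line in text.splitlines():
        if line.startswith("TM1SERVER_version="):
            return line.split("=", 1)[1].strip()
    return None

version = tm1_server_version(sample_cmplst)
```

In practice you would read the file from the install directory (e.g. with open(...).read()) instead of the sample string.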

    2. Go to properties of tm1s.exe

    Another way to find the TM1 server version number is to open the properties of the bin64\tm1s.exe file and look for the File version:

    Check the version number with Pulse

    A quicker way to check the TM1 version number is to open the Pulse dashboard, in the server section you will find the TM1 server version which matches the version number in the cmplst.txt:

    Once you know the version number, you can check the list below to find out which release (RTM (first release), interim fix or fix pack) is installed in your environment:

    Determine the version of IBM Cognos TM1

    If your TM1 server version number is between 9.5.20000.x and 10.2.20700.x, you should check this IBM article which lists all IBM Cognos TM1 versions from 9.5.2 to 10.2.

    Determine the version of IBM Planning Analytics

    If your TM1 server version number starts with 11, it means that you have installed one of the IBM Planning Analytics version below:

     Planning Analytics Local 2.0

    • tm1s.exe = 11.0.00000.918

    • cmplst = 11.0.00000.918

     Planning Analytics Local 2.0.1

    • tm1s.exe = 11.0.100.927

    • cmplst = 11.0.00100.927-0

     Planning Analytics Local 2.0.1 IF1

    • tm1s.exe = 11.0.101.931

    • cmplst = 11.0.00101.931

     Planning Analytics Local 2.0.2

    • tm1s.exe = 11.0.200.998

    • cmplst = 11.0.00200.998

     Planning Analytics Local 2.0.2 IF2

    • tm1s.exe = 11.0.202.1014

    • cmplst = 11.0.00202.1014

     Planning Analytics Local 2.0.2 IF4

    • tm1s.exe = 11.0.204.1030

    • cmplst = 11.0.00204.1030

     Planning Analytics Local 2.0.3

    • tm1s.exe = 11.1.0.30

    • cmplst = 11.1.00000.30

     Planning Analytics Local 2.0.3 (Version number updated by IBM in Dec 2017)

    • tm1s.exe = 11.1.4.2

    • cmplst = 11.1.00004.2

     Planning Analytics Local 2.0.4

    • tm1s.exe = 11.2.0.27

    • cmplst = 11.2.00000.27

     Planning Analytics Local 2.0.5

    • tm1s.exe = 11.3.0.27

    • cmplst = 11.3.00000.27

    Planning Analytics Local 2.0.5 IF3

    • tm1s.exe = 11.3.3.1

    • cmplst = 11.3.00003.1

    Planning Analytics Local 2.0.6

    • tm1s.exe = 11.4.0.21

    • cmplst = 11.4.00000.21

    Planning Analytics Local 2.0.6 IF3

    • tm1s.exe = 11.4.3.8

    • cmplst = 11.4.00003.8

    For more information about all IBM Planning Analytics versions you should check this IBM link.

    READ MORE:

    5 simple ways to be more productive as a TM1 Administrator

    The following article describes how TM1 Administrators using Pulse for TM1 can be more productive and deliver better outcomes for their business users.

    1. Be Pro-Active

    Do not wait for an issue to happen. Pulse for TM1 can send you email alerts before an issue occurs, for instance if a TM1 process or chore is taking longer than usual, if the data does not reconcile, and much more...

    Do not wait for your TM1 users to complain before taking action; with Pulse you will be the first to know when and where there is an issue.

    2. Know all possible impacts

    Always keep an overview of even the most complex TM1 models with Pulse’s magic documentation feature including all dependencies. Having current documentation and the ability to view a TM1 model visually via relationship diagrams also means your team can feel confident changing the system knowing all possible impacts.

    3. Be confident before a new release

    Never forget a TM1 object. When migrating a TM1 object, Pulse finds all dependencies which should be included during the migration.

    View changes before migration. Pulse tells you exactly what is going to be updated in the target instance; you can view in detail the exact lines of code which are going to be updated.

    4. Easily roll-back changes

    Pulse gives you total transparency about what changed. Every time an object is changed by either a user or a system process, it is tracked and logged by Pulse. There is no need to keep lists of what has changed in your development environments; in production, Pulse can provide the needed governance and transparency.

    Quickly roll back changes: use the rollback feature to retrieve a previous version of a TM1 process or rule file.

    5. Improve testing procedures

    It is fair to say that testing phases are painful for business users. In addition to their daily tasks, they have to find time to test new features before a new release. Unfortunately, they often do not have time to do proper testing. Pulse helps you ensure that testing procedures have been strictly adhered to by monitoring and tracking the test users' activity.

    10 Tips to improve your TM1 application

    Over the years your TM1 application will grow: you might create new cubes, create/update rules or TM1 processes, create new elements... All these changes will have an impact on your application's performance.

    If you want to minimize this impact, here are 10 tips that you should look at. When you design an application, it is always difficult to find the right balance between user requirements, design standards and performance, which is why these tips might not all be relevant to your application.

    1. Improve TM1 rules

    As the size and complexity of a system grows, your TM1 rule files become more complex, so if you do not want to lose calculation time, here are some best practices that you can follow:

    • Use SKIPCHECK to enable the sparse consolidation algorithm.

    • Use TM1 attributes instead of text functions (SUBST, SCAN): text comparisons in TM1 are slower than operating on attributes.

    • Try to create consolidations instead of rules: Consolidation is the fastest way to calculate values:

    Use FTE as a consolidation of Hours with a weight of (1/8)

    instead of

    ['FTE']=['Hours']/8;

    • Area statements are faster than IF statements:

    ['A'] = N: 1;
    ['B'] = N: 0;
    

    instead of

    [{'A','B'}] = N: IF( !Column @= 'A' , 1 , 0 ) ;

    Pulse for TM1 can help you to follow these best practices. The validation report can help you to validate your best practice rules against your model.

    2. MTQ (Multi-Threaded Query)

    Even if TM1 was already fast, IBM made it even faster with the introduction of MTQ in v10.2. Instead of doing a calculation on one CPU, TM1 can now use several CPUs at the same time. As per IBM's recommendation, best practice is to set the MTQ value so that the maximum number of available processor cores is used.
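MTQ is set in the tm1s.cfg file of the instance. A minimal fragment (ALL tells TM1 to use all available cores; alternatively set an explicit number of cores suited to your server):

```ini
[TM1S]
MTQ=ALL
```

A server restart is required for tm1s.cfg changes like this to take effect.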

    3. Run TM1 processes in parallel

    Parallel processing is a TM1 feature which unfortunately is not used enough. Imagine, for instance, that instead of running one TM1 process which copies data for a whole year, you run one TM1 process which launches one TM1 process per month (12 processes running at the same time) using TM1RunTI.

     


    If you want to do parallel processing, we highly recommend using Hustle, a free tool which helps you manage the number of threads you want to run at the same time.
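The fan-out pattern described above can be sketched outside of TM1 as well. A minimal Python illustration that launches one worker per month in parallel; here each worker only builds the TM1RunTI command line it would execute (the executable path, process name, server name and parameter are placeholders, and in a real script you would run each command with subprocess):

```python
from concurrent.futures import ThreadPoolExecutor

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def build_command(month):
    # Placeholder TM1RunTI command line; in a real script you would
    # execute it, e.g. with subprocess.run(command, shell=True)
    return f"TM1RunTI.exe -process Copy.Data -server MyTM1 pMonth={month}"

# Fan out: one worker per month, all launched at the same time
with ThreadPoolExecutor(max_workers=12) as pool:
    commands = list(pool.map(build_command, MONTHS))
```

Tools like Hustle play the same role inside TM1, with the added benefit of throttling how many processes run concurrently.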

    4. Avoid User Locking

    Having users locked out and unable to access their data is one of the worst situations for a TM1 Administrator; fortunately it does not happen often in the TM1 world. Here are some tips to avoid user locking:

    • Turn off cube logging by using CellPutS( 'NO', '}CubeProperties', 'cubename', 'LOGGING' ) instead of CubeSetLogChanges( 'cubename', 0 ).

    • Avoid dimension updates during working hours (split metadata updates and data updates into 2 different TI processes).

    • Use CellGetN with caution; it can create locking.

    • Security Refresh (use processes over rules to update TM1 security).

    • Set up Alerts to be the first to know when there is an issue.

    5. Train your users

    After implementation it is usual for users to develop their own TM1 spreadsheets, which may or may not be designed according to best practices. Depending on what they build (a large cube view, or spreading on a high consolidation), it might slow down or even lock TM1 during working hours. In order to avoid these issues you should make sure that they know the TM1 basics:

    • Use the VIEW function in a slice.

    • Use DBRW instead of DBR.

    Pulse analyses all Excel workbooks linked to your TM1 application and can help you to identify users who need training.

    6. Restart your TM1 server on a weekly basis

    TM1 elements that are deleted are not removed from the TM1 indexes until a server restart, so if you are frequently removing and adding elements your memory will grow over time. Restarting the TM1 instance also helps remove temporary files.

    7. Clean dimensions

    The number of elements in a dimension increases over time. Even if it is not an issue for a cube to have lots of "0" cells, having lots of elements in your dimensions will slow down all your MDX queries and dynamic subsets.

    This is relevant only for large dimensions (>100,000 elements) such as Product or Customer dimensions. Be very careful during this step, because if you delete an element you will lose the data attached to it.

    8. Snapshot old data

    Rules should be applied only on the specific cube area where data changes. You do not need rules if your data is static, such as last year's data. You could export the cube data for the specific year, remove the rules and then load the data back; the data for that year will then be static and much faster to query.

    9. Tune VMM/VMT

    TM1 keeps calculations in memory. The first time you open a cube view, if it takes more time than the VMT value (default is 5 sec), TM1 will keep this view in memory (TM1 creates a stargate view). The next time you open the cube view, it will be much faster because TM1 opens the stargate view instead of recreating the view from scratch.

    VMM is the amount of RAM reserved on the server for the storage of stargate views. Increasing this value will allow TM1 to store more stargate views, which means TM1 will be faster but will consume more memory.

    10. Disable anti-virus on the TM1 data folder

    Virus scan software can negatively impact your TM1 application performance. You should set up your anti-virus to skip TM1 folders.

    Read more: