Carolinian Business Intelligence
https://datamensional.com/blog-old
Our mission is to create end-to-end, cost-effective analytics solutions that provide actionable, relevant, and timely data to decision makers.

Cognos Insight Import Data Issue
https://datamensional.com/blog-old/2013/12/cognos-insight-import-data-issue/
Mon, 16 Dec 2013 18:11:13 +0000

Are you having issues with Insight not being able to import data? The root cause has been traced to Internet Explorer 11. Downgrading IE 11 allows data import to work again.

Symptom: When you select “Import Data…” from the Get Data menu, Insight gets stuck on this menu and never proceeds to import the data.

Solution: Follow these instructions to downgrade IE 11:

http://www.wikihow.com/Uninstall-Internet-Explorer-11-for-Windows-7

 

Note: This issue affects Insight 10.2.1 and earlier. It has been reported and should be fixed in the next fix pack or release.

 

 

Designing Cognos Insight
https://datamensional.com/blog-old/2013/10/designing-cognos-insight/
Fri, 18 Oct 2013 19:08:41 +0000

One of the most powerful features of Cognos Insight is the ability to apply different themes to your workspace. A theme makes your widgets pop a little more and adds some color, but sometimes we want more than the built-in themes offer. At this point Insight ships with only a few themes, and all they give you is a background and a new color scheme.

I will show you how to make a theme that fits your needs and the needs of your business, covering basic themes, backgrounds, and buttons. The place I like to start is your own logo.

For demonstration purposes, I will be using the Datamensional logo.

[Image: the Datamensional logo]

We can see that our colors are blue and orange, so I should use a theme with similar colors. Feel free to test out other themes, but try not to use too many colors; we don’t want to distract the users. The goal is to keep attention on the charts and the valuable information they contain.

The Default Theme

From the very start, we have the option of creating charts. This is a powerful tool right out of the box, but there is much more functionality and design that we can add. We can make these widgets give us a little more information as well as make them more pleasing to the eye. Here is a screenshot of a couple of widgets I made from some of our data.

[Screenshot: widgets using the default theme]

 

Just by making a few minor tweaks we can have these charts provide us with more information and have a more uniform look between tabs.

The Built-In Theme

Using a built-in theme is the simplest way to start. After opening Cognos Insight, you can choose a theme near the bottom of the window, or click the style menu.

[Screenshot: the theme selection menu]

It’s usually wise to choose a color scheme that matches your logo, if you have one. For Datamensional the best choice is Luminance, since the color schemes match. You are now on your way to making a basic theme.

Basics on Widgets

Sometimes it is nice to add borders to your widgets so they don’t seem to be floating in space. To do this, click the widget menu bar and select “Show widget borders”. If you double-click on an open area, you can also write text for your widget titles. Right-clicking on a widget brings up the widget options, as shown below.

[Screenshot: the widget options menu]

I have unchecked some of the axis labels to keep our numbers confidential. It is typically best to have a legend and both the X axis and Y axis selected. I have also selected Show Grid Lines, which makes it easier to read values off the chart.

Lastly, to add a logo, just drag and drop the logo file directly onto your workspace. With all of these additions in place, this is what our workspace now looks like.

[Screenshot: the finished Datamensional-themed workspace]

Themes for Advanced Users

This method is best for someone who is comfortable with XML. I will not go into full detail, but I will tell you where to find the files to edit. First navigate to:

C:\Users\UserName\AppData\Roaming\IBM\Cognos Insight\configuration\config_10.2.2254.0\org.eclipse.osgi\bundles\67\1\.cp\model\themes

Next, choose a theme and unzip it. It is best to copy the files and rename them so the originals stay intact. From there, feel free to plug away and change what you need, from fonts to styles.
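
Assuming, as the step above implies, that each theme lives in that folder as a zip archive (the theme file name and the config/bundle numbers in the path will differ between installs), the copy-and-unzip step can be scripted. Here is a minimal Python sketch, not an official procedure:

    import zipfile
    from pathlib import Path

    # Assumed locations -- the config_* and bundle numbers vary by installation.
    themes_dir = Path(r"C:\Users\UserName\AppData\Roaming\IBM\Cognos Insight"
                      r"\configuration\config_10.2.2254.0\org.eclipse.osgi"
                      r"\bundles\67\1\.cp\model\themes")
    source_theme = themes_dir / "Luminance.zip"        # hypothetical theme file name
    work_dir = themes_dir / "MyCompanyTheme_src"
    new_theme = themes_dir / "MyCompanyTheme.zip"

    # Copy by extracting to a working folder so the built-in theme stays untouched.
    work_dir.mkdir(exist_ok=True)
    with zipfile.ZipFile(source_theme) as zf:
        zf.extractall(work_dir)

    # ... edit the extracted XML, fonts, and images in work_dir here ...

    # Re-zip the edited files under the new theme name.
    with zipfile.ZipFile(new_theme, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in work_dir.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(work_dir))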

When creating a background, make it 1152 x 864. That’s the maximum space available for a screenshot when using the print-to-PDF feature. To create backgrounds you can use a free tool called GIMP, which can be downloaded at http://www.gimp.org. And there you have it. Enjoy! Be sure to tell us what you think!

How to Work With Dates in Cognos Insight
https://datamensional.com/blog-old/2013/07/how-to-work-with-dates-in-cognos-insight/
Fri, 19 Jul 2013 18:52:06 +0000

Dealing with dates in Cognos can sometimes be a daunting task. Dates should be straightforward, but sometimes they take extra work to get what you want. Your data lists the date of each transaction, but maybe you only care about the data on a per-week, per-month, or per-quarter basis. Your data isn’t arranged according to those groupings, but it is quite easy to roll your dates up the way you want.

Let me demonstrate how to adjust dates, and get them to work just the way you want, by using some demo data.  I will start by assuming we have already imported our data.

  • In the overview area I will select “Close_Date” for the rows; for the columns I will select our measures and pick Amount. Also in the columns, I will select closed business only. My numbers look like this.

[Screenshot: Close_Date rows crossed with the Amount measure]

  • This works great when we just want all of the data. But what if we only want last month, or just the last quarter? It’s quite simple: in just a few steps you can have a dynamic date that changes with the current date.
  • Expand the content pane by clicking the button in the top right.
  • Right click the dimension you want to edit, mine will be Close_Date, and select Edit.
  • To the right of the Copy and Paste buttons, select the button called Time Roll Ups.
  • I will select Current Quarter. Once this is done we can see all of our roll ups. Click Close to get back to the main workspace.

[Screenshot: the roll ups listed in the dimension editor]

  • Now in our overview area we can expand Close_Date and select the current quarter. We now get only the information relevant to this quarter, and as we enter a new quarter this information will update automatically.
  • Using a column chart, we can see that we have closed only two deals so far in the current quarter.

[Screenshot: column chart filtered to the current quarter]

  • It is also nice to add a roll up for a past quarter, so you can review how the current quarter compares to the previous one. I will repeat the steps above and add Previous Quarter. You can now see that my current quarter is quite a bit below the previous quarter, but that is okay, as we are only a third of the way into the quarter.

[Screenshot: current quarter compared to the previous quarter]

We started with data that only had individual transaction dates and wanted to collapse them to a per-quarter basis. We were able to set up roll ups that give us the current quarter and the previous quarter, and using a column chart we can visually compare the two.
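
Insight builds these roll ups for you in the Time Roll Ups dialog. Purely as an outside-of-Insight illustration of what “current quarter” and “previous quarter” mean for transaction-level dates, here is a small pandas sketch; the column names follow the demo data above and the CSV file name is hypothetical:

    import pandas as pd

    # Hypothetical export of the closed-business data used above.
    df = pd.read_csv("closed_business.csv", parse_dates=["Close_Date"])

    # Roll individual close dates up to calendar quarters.
    df["Quarter"] = df["Close_Date"].dt.to_period("Q")
    rollup = df.groupby("Quarter")["Amount"].sum()

    current_q = pd.Timestamp.today().to_period("Q")
    previous_q = current_q - 1

    print("Current quarter: ", rollup.get(current_q, 0))
    print("Previous quarter:", rollup.get(previous_q, 0))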

Dates are easy to work with and are one of the most important dimensions when dealing with data. It’s always good to know when your sales happened and how sales should look in the future. Using these roll ups, we get widgets that are useful and up to date every time they are opened, with no ongoing maintenance.

Let’s Make A Deal! Intuition, Analytics, and Answering the Right Question
https://datamensional.com/blog-old/2013/06/lets-make-a-deal-intuition-analytics-and-answering-the-right-question/
Wed, 26 Jun 2013 23:06:46 +0000

I was wandering around YouTube one weekend looking for something mentally stimulating when I came across the Monty Hall problem. For those who are not familiar, the problem is named after the host of the game show Let’s Make A Deal. On the show, there was a game where the contestant would be presented with three doors: one had a car behind it, the other two had goats. The contestant would choose the door they thought had the car behind it, and then Monty would help them out by opening one of the other doors to reveal a goat. The contestant would then be given the chance to stay with their first choice or switch to the other unopened door. So what would you choose, and does it even matter? I generated some data to simulate it, and the results will probably surprise you.

Most people’s intuition tells them that it doesn’t matter which they choose (the odds are even), but the chart above shows otherwise. Out of 10,000 generated attempts, a third of them win if they stay, and the rest win if they switch. Now that you’ve seen this simple chart, which would you choose? The choice seems pretty obvious given the data in front of us. Without the chart, most people would not even second-guess their incorrect assumption.
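
The chart in the original post came from generated data; if you would like to reproduce the numbers yourself, a minimal Python simulation of the game looks like this:

    import random

    TRIALS = 10_000
    stay_wins = switch_wins = 0

    for _ in range(TRIALS):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Monty then opens a goat door that is neither the pick nor the car.
        # Staying wins only when the first pick was already the car;
        # switching wins in every other case.
        if pick == car:
            stay_wins += 1
        else:
            switch_wins += 1

    print(f"Stay wins:   {stay_wins / TRIALS:.1%}")    # roughly 33%
    print(f"Switch wins: {switch_wins / TRIALS:.1%}")  # roughly 67%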

What is interesting is that almost everyone gets this problem wrong, including many people with PhDs in mathematics when the problem was first proposed (there are currently over 40 papers written on this problem and its variants). There are different explanations for this counter-intuitive phenomenon all over the web if you look up “Monty Hall Problem,” so I won’t get too far into that. The chart is all you need to draw the correct conclusion.

Most people know enough about probability to handle basic odds.  Flipping a coin and getting heads will happen half of the time, and similarly for rolling a specific number on various dice.  In the same respect, picking a car from behind a series of doors is one out of however many doors there are.  If I had asked you what are the odds a given door has the car behind it, you could easily tell me the answer.

Randomly picking one door out of three (as in the Monty Hall problem) gives a one-in-three chance of winning; randomly picking one door out of two gives one-in-two. If you flipped a coin to decide whether to stay or switch after Monty revealed a goat, the even odds that most people expect would indeed hold. So it isn’t that people’s intuition is wrong, it is just answering the wrong question!

The fundamental issue is that most people assume the doors are always independent, like a coin toss. The doors are closer to a deck of cards, though: knowing the order of the cards doesn’t change the odds of your random pick, especially if you only find out after you have drawn your card. Your original door keeps its one-in-three odds, so the revealed goat leaves the remaining two-thirds on the other unopened door.

Now let’s say that you had some strange business that relied on people picking cars instead of goats. The average value of a goat is about $500, and a new car is about $30,000. The chart above shows the difference in average value between the three options. Staying is clearly a poor choice, as it brings in almost half of what always switching would. The intuitive choice of just letting people decide for themselves (since it “doesn’t matter”) is better, but still only about three quarters of what you would get with switching.
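
Using the goat and car values above, the average payout per strategy works out as follows (a back-of-the-envelope calculation, not the data behind the original chart):

    GOAT, CAR = 500, 30_000

    stay   = (1/3) * CAR + (2/3) * GOAT   # keep the first pick: wins 1/3 of the time
    switch = (2/3) * CAR + (1/3) * GOAT   # always switch: wins 2/3 of the time
    coin   = (stay + switch) / 2          # flip a coin between staying and switching

    print(stay, coin, switch)  # about $10,333 vs $15,250 vs $20,167 per contestant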

So how much can you trust your intuition? When your personal reputation, money, company, and employees’ well-being are on the line, can you afford not to make the best choice? Can you afford not to reassure your peers and colleagues that your decisions are sound? The right information can be worth a car.

Or at least a goat.

Want to play with the data yourself? You can get the .cdd here.
Don’t have IBM Cognos Insight?  You can get that here.
Want to ask us how we can help you make better decisions?  You can do that here.

Joining Columns In Cognos Insight
https://datamensional.com/blog-old/2013/05/joining-columns-in-cognos-insight/
Thu, 30 May 2013 20:15:50 +0000

One of the benefits of Cognos Insight is the ability to combine columns during the import process. This can help when browsing the data through explore points. Combining columns is often beneficial because the result is easier to read, and it is not always necessary to sort by each column individually.

Import Process

  • Open Cognos Insight, click the Get Data button at the top, and select “Import Data…” from the drop-down menu.
  • Click Browse, select your data source, and click Open.
  • We can now see a preview of our data; click the Advanced button at the bottom. (I will be using sales data for an electronics store.) We can now see all of our measures and dimensions.

  • We now want to Ctrl+Click the columns we want to combine; I will combine Month and Product Type.  These are located under “Source Items” on the left hand side.
  • Right click and select “Do not map” if you only want the single combined column. Otherwise, leave them mapped if you would like to sort by the combined column as well as by each column individually.
  • We must be sure that each of the columns is text and not a number, so if we had chosen something that was a number, we would need to change it to a string or text field.
  • The top right corner will say “Show Properties”. Select that to open the properties window.
  • Select Data type: and make sure it says text.
  • Select “Add Calculated Column” in the bottom left corner.
  • Drag in the column you want, followed by a “pipe” to append text or another column. I have added Month, then a hyphen, then Product Type (a rough equivalent in plain code is sketched after this list).
  • Click Preview and make sure it looks correct. Then click import.
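
Outside of Insight, the calculated column above is just a string concatenation. As a rough pandas equivalent (column names taken from the example, the CSV file name is hypothetical):

    import pandas as pd

    # Hypothetical export of the electronics-store sales data.
    sales = pd.read_csv("electronics_sales.csv")

    # Equivalent of the calculated column: Month, a hyphen, then Product Type.
    sales["Month-Product Type"] = (
        sales["Month"].astype(str) + "-" + sales["Product Type"].astype(str)
    )

    print(sales[["Month", "Product Type", "Month-Product Type"]].head())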

Explore Points with new Joined Column

  • Restore the widget from the top right corner.
  • Under the Data menu on the right, expand your cube and drag over your new expression. If you do not see the Data menu, press the content pane button in the top right of your workspace.
  • You can now sort your widget by a combined column.
Using MySQL (or other ODBC connections) with Cognos
https://datamensional.com/blog-old/2013/04/using-mysql-or-other-odbc-connections-with-cognos/
Tue, 23 Apr 2013 14:57:01 +0000

MySQL is quite possibly the most commonly used database today; its free license tier makes it a very low-risk investment. While it is not well geared toward the large queries of reporting and analysis, there may be a need to report directly from an existing MySQL database for any number of reasons.

The issue with MySQL in Cognos is that it is only supported via generic ODBC drivers. The trick to remember with ODBC drivers is that even if the OS and the Cognos installation are 64-bit, the ODBC drivers must be 32-bit to work with Cognos. Interestingly enough, Cognos Insight can use 64-bit drivers (probably because it inherits more from TM1 than from Cognos proper). Also note that ODBC drivers cannot run in Dynamic Query Mode. This guide steps you through setting up a MySQL data source specifically, but it can easily be adapted to any ODBC driver.

Download and install the 32-bit ODBC drivers

  • Go to the MySQL ODBC download page and download the 32-bit version of the drivers.  The msi version is probably the more convenient of the two, but the zip version should work as well.
  • Once downloaded, a simple double click of the msi should start the installation wizard (the zip version is probably different, it should have instructions included).

Set up a 32-bit ODBC connection

  • In 64-bit versions of Windows, the ODBC connection program accessible from the Start menu is the 64-bit version. To open the 32-bit one, go to Start -> Run and enter the following: C:\Windows\SysWOW64\odbcad32.exe (this may vary slightly based on the system setup).
  • Select the System DSN tab and click the Add button.  This allows any user access to the connection information. 
  • From the list, select the MySQL 3.51 Driver (the version number may vary).
  • Set up the connection info to reflect the MySQL database you are trying to connect to.
  • Press the OK button and the connection should now show up in the list. (A quick way to sanity-check the new DSN is sketched just after this list.)
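
Before moving on to Cognos, it can be worth sanity-checking the new DSN from Python with the pyodbc package; note that a 32-bit driver needs a 32-bit Python interpreter. The DSN name and credentials below are placeholders for whatever you entered in odbcad32:

    import pyodbc

    # Placeholder DSN name and credentials.
    conn = pyodbc.connect("DSN=mysql_src;UID=report_user;PWD=secret")
    cursor = conn.cursor()
    cursor.execute("SELECT VERSION()")  # simple round trip to prove the DSN works
    print(cursor.fetchone()[0])
    conn.close()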

Create a new Cognos Datasource

  • Open a web browser and log in to Cognos as an administrator.
  • If  not already open, go to IBM Cognos Administration and select the Configuration tab.  The Data Source Connections should be automatically selected. 
  • Select the New Data Source button on the upper right part of the screen.
  • Give it a name and press the Next button.
  • Select ODBC from the Type list. Leave everything else as it is and hit Next.  
  • Enter the ODBC data source, User ID and Password as they appear on the ODBC database.  Use the Test Connection link on the bottom to confirm the setup. 
  • Press Finish and the connection should now be visible in the list: 

Change the Project Query Mode

  • Open Framework Manager and the project the data source is for.
  • Select the Project at the top of the tree view on the left side of the window.
  • Under the properties tab at the bottom of the screen, change the Query Mode option from the default Dynamic to Compatible. 

It should now be possible to use this MySQL database in any packages in this project.

Raising the Bar with Mobile BI
https://datamensional.com/blog-old/2012/09/mobile-bi/
Fri, 07 Sep 2012 18:19:39 +0000

(Enjoy a sneak peek of some dashboard screenshots as you read)

In recent years, mobile device sales have outpaced those of personal computers. One survey reported that 95 percent of employees had purchased a mobile device with the intent to use it for work. For many, this has meant discovering the convenience of the smartphone and taking full advantage of the opportunity to integrate applications and conduct business in the palm of their hand. By 2015, 50% of devices used in business organizations are expected to be mobile.

Newer and larger than the smartphone is the tablet. Tablets are a convenient replacement for a laptop when you’re on the go, as they forgo the traditional keyboard for an on-screen version, yet they are still capable of many of the tasks you would do on your desktop or laptop.

________________________________________________________________

Example – Dashboard for Sales Executive Depicting Revenue Trends:

Image captured on iPad

 

Curious about implementation in the workforce?

  • Mobile BI has allowed retailers to make the best decisions for their businesses without the hassle of having to always be at their desk. With the tap of a finger they are able to access information on how well their marketing campaigns are doing, or review buying patterns – all while catching a cab to their 2:00 production meeting.
  • In the financial services industry, bankers are able to make better decisions and analyze risk levels with greater accuracy. Information is being integrated across silos, creating ONE set of numbers and increasing transparency.
  • Recent research has revealed that about 3 out of every 10 doctors are currently using iPads at work, making it easy to obtain important patient information at a moment’s notice.

Business Intelligence applications for tablets and cell phones are now an integral part of the mobile world. Many of the tools used by Datamensional consultants have mobile capability, and we are doing our part to make it easier than ever to keep your data close at hand. Call us for more information on optimizing your BI experience from your mobile device.

1-888-966-DATA (3282)

Additional Screenshots to convey potential for Mobile BI:

Dashboard Depicting Key Performance Indicators for the Cincinnati Zoo:

Image captured on iPad

Example of Business Analytics Delivered via Android Smartphone:

Image captured on an Android smartphone


Big Data, What is it Exactly? Datamensional’s Take
https://datamensional.com/blog-old/2012/07/big-data/
Wed, 11 Jul 2012 16:51:58 +0000

Joseph A. di Paolantonio
Benjamin B. Goewey

Much of the current hype in Data Management & Analytics is around the concept of Big Data, and Hadoop is at the center of the hype storm. The other two hot areas are Mobile and Elastic Cloud Computing, and Cloud is central to both Big Data and Mobile implementations. This blog post focuses on Big Data, and on how Datamensional has helped its customers meet this challenge with tools from Pentaho, Microsoft, and IBM that work with Hadoop and other NoSQL data management systems.

In February 2010, I suggested this approach to big data:

“Big data really isn’t about the amount of data (TB & PB & more) so much as it is about the volumetric flow and timeliness of the data streams.  It’s about how data management systems handle the various sources of data as well as the interweaving of those sources.  It means treating data management systems in the same way that we treat the Space Transportation System, as a very large, complex system.”

Many now flock to the definition of Big Data as three Vs. These Vs were debated hotly throughout 2011, on Twitter, blogs, and journals, with more Vs added. One good example can be found in R. Ray Wang’s article on Forbes:

http://www.forbes.com/sites/raywang/2012/02/27/mondays-musings-beyond-the-three-vs-of-big-data-viscosity-and-virality/

The three Vs on which everyone agrees are:

  • Volume – essentially, more data than you are accustomed to handling on your current computing platform, whether that’s Excel on your laptop, Oracle on a *nix box, or SAS on a mainframe
  • Velocity – from requesting a report from IT and waiting a week or three, to near-real-time, [undefined, but not really] real-time, and streaming data (a.k.a. Continuous Event Processing (CEP))
  • Variety – typically defined as structured (Entity Relationship Diagram (ERD) or schema-on-write modeled data in your standard RDBMS), semi-structured (such as XML), or unstructured (with email, documents, and tweets as common examples)

In the article cited above, Ray adds Viscosity and Virality (not to be confused with virility). Viscosity can be seen as anything that impedes the interweaving or flow of data to create insights. Virality is how quickly an idea goes viral on the interwebs – the rate at which ideas are dispersed across various Internet or Social Media sites (Twitter, YouTube, LinkedIn, blogs, etc.).

Compare these five Vs to my definition of Big Data, given above: interweaving a heavy volumetric flow of multiple types of complex data from a variety of sources.

There are many sources of Big Data, both internal and external. To name just a few:
– Social Media
– Smartphones
– Weblogs
– Server & Network Logs
– Sensors
– The Internet of Things

Getting tons of TB to PB of data out of a data center into a cloud or vice-versa, or from one cloud to another, is logistically ridiculous: what’s in a cloud generally stays in that cloud. Getting analytics closer to the data is therefore paramount for any practical application. Many vendors have recognized this, and over the past few years the Analytic Database Management System (ADBMS), Hadoop, and Cloud *aaS (* being software, infrastructure, platform, or data – as a Service) markets have been evolving at a pace not seen in decades, with Pentaho, SPSS, SAS, the R statistical language, and other analytical software being embedded in ADBMS or Cloud offerings from Teradata/Asterdata, EMC/Greenplum, HP/Vertica, SAP/Bobj/HANA/SybaseIQ, Oracle, and IBM/Cognos/Netezza on the one hand, and AWS, MS Azure, and Cloudera on the other.

Among others, IBM, Microsoft, and Pentaho offer tools to improve analytics on Hadoop and other Big Data sources, which is what matters most to Datamensional. Let’s look at one customer case study, using Pentaho Hadoop Data Integration (PHD), Hadoop HDFS and Hive, and 50,000+ rows of data from one cell phone every minute. I had the honour of working with Ben and Gerrit of Datamensional on this project.

The Status Quo: Homegrown reporting & ETL solution written in Java and leveraging some open source tools and libraries to transform cell tower log files into CSV files for loading into an Oracle Datawarehouse with web-based administration & reporting.

The Business Need: Provide exploratory and in-depth analytics capabilities to the customer’s business analysts on a rich data set that was growing at a mind-boggling pace, as smartphone use opened new avenues for services such as location-based advertising and understanding cell phone users’ habits. The sample data contained 50,000 records describing a single mobile device’s usage over a one-minute period.

The Datamensional Solution: A Proof of Concept comparing the homegrown solution against Hadoop & Hive with the integrated Pentaho solution for Hadoop, including PHD and the pluggable architecture of the Pentaho BI Server. The PoC provided in-depth solutions in 6 areas:

  1. Installation, configuration and performance comparisons of using Pentaho Hadoop Data Integration.
  2. Demonstrating Pentaho’s Plug-in Architecture and use as a BI Platform
  3. Connecting Pentaho Analyzer to third party OLAP engines
  4. Demonstrating Pentaho Clustering & Parallel Processing capabilities
  5. Customizing the look & feel of the Pentaho User & Administration Consoles
  6. Automating the Installation Process including customizations

The Results: All points of the PoC were exceeded. While all six points are important for developing a Big Data solution or product using Pentaho, the first and fourth points, regarding PHD and clustering, are the most important for Big Data and Big Data Analytics. Other areas of the PoC showed the flexibility of the Pentaho Business Analytics platform for reporting, OLAP, data mining, and dashboards using Pentaho and other solutions, both from the Pentaho Community and as integrated by the Datamensional team.

One amazing result: with the homegrown solution and the native Hadoop libraries and Hive JDBC driver, a simple report took so long that the customer killed the job rather than keep waiting; after replacing them with the Pentaho PHD versions, the same report returned in less than 10 seconds. Of note is that the PHD libraries replace the native Hadoop lib directory and files on the Hadoop name node and ALL data nodes. Clustering PDI by installing the lightweight Jetty server, Carte, on each node parallelizes data integration and increases throughput for Hadoop and Hive. PDI provides many mechanisms to tune the performance of individual transformation steps and job entries, as well as clustering, parallelizing, and partitioning for Transformations and Jobs.

The PHD libraries allow PDI and Hadoop to each bring their strengths to the data management challenges of moving, controlling, cleaning, and pre-processing extreme volumetric flows of data. Using the native Hadoop libraries, a load of the sample data took over 5 hours; this was reduced to 3 seconds using the PHD libraries and the PDI client, Spoon, to create the transformations and orchestrate the job across the various clusters.

Additional Resources:

Wikipedia Definition

TDWI – The Three Vs of Big Data Analytics

O’Reilly Radar: What is Big Data?

Quora: What is a Good Definition of Big Data?

Datamensional Resources:

IBM on Big Data

Datamensional’s Big Data Integration Service

Pentaho Big Data Preview

Pentaho Community: Efficiency of using ETL for Hadoop vs. Code or PIG

Microsoft Case Study: MS BI and Hadoop

Xaction Basics – FTPing a file
https://datamensional.com/blog-old/2012/03/xaction-basics-ftping-a-file/
Tue, 20 Mar 2012 14:30:41 +0000

Here is another little xaction that demonstrates the FTP capabilities of xactions. Just like the last one, make sure to save it in the workspace directory for Design Studio so that it displays properly when you try to edit it.

Open the ftp_prpt_report_w_parm.xaction file in Design Studio (Eclipse).

You will want to change the default values for each input. Note that if you left it the way it was and ran it from PUC, you could still supply your own FTP settings in place of the defaults. At first I tried to use an @ sign in the username for our Datamensional.com server, but the solution will not work with this: the output part of the .xaction uses the @ sign to separate the username from the server name. There may be a way to get around it with escape characters.

Go to your inputs in Design studio and change all the defaults for the following:

  • reportname
  • ftp_host
  • user
  • password
  • directory

Keep all the others at their defaults. You will also want to change the solution listed under Resources to one of your own reports.

Save it to the solution directory in Design Studio. In the BI Server, refresh the repository and double-click on the .xaction. After it completes successfully, you’ll see “Action Successful.” Log into the FTP directory you saved to, and the file should be there.
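
For reference, the FTP output step of the xaction boils down to something like the following plain-Python (ftplib) upload; every connection detail here is a placeholder standing in for the xaction inputs:

    from ftplib import FTP

    # Placeholder values standing in for the xaction inputs listed above.
    ftp_host, user, password = "ftp.example.com", "report_user", "secret"
    directory, reportname = "/reports", "sales_report.pdf"

    with FTP(ftp_host) as ftp:
        ftp.login(user, password)
        ftp.cwd(directory)
        with open(reportname, "rb") as f:
            ftp.storbinary(f"STOR {reportname}", f)  # upload the generated report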

Xaction Basics – Sending an Email
https://datamensional.com/blog-old/2012/03/xaction-basics-sending-an-email/
Thu, 15 Mar 2012 14:30:38 +0000

There are moments when working with the BI Server that you will want functionality that isn’t available through the server itself or any existing plugin. In these situations you will need xactions, which can be a little intimidating at first glance. For this reason, we’ll provide some xactions that do basic things for you to look at. If you’d like a more in-depth general introduction, check out this techcast by Mike Tarallo.

If you are coming from a fresh install with the sample database, you should have no problem running this .xaction. The only changes you need to make are the receiving email address and the report being sent. You should receive an email from the address you have set as the default on your BI Server.

Make sure to put the file in the workspace when you open it through Pentaho Design Studio, otherwise it will not display any values.  The XML can still be edited this way, however.

After making your modification, save it to the solution directory from Design Studio (Eclipse) and go to PUC and refresh your repository.

Then double click on the solution.  You should see the following default message that shows the Action Sequence (.xaction) was successfully executed:

This message is not all that pretty and could actually be replaced with another message. It simply says that the action completed successfully; the long number is the unique ID of the document just created in the content repository, and at the end it shows what format the output was.

You can execute this step via a URL from another application, from another part of the suite, from inside PRD, or from CDF.
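
If you are curious what the email step amounts to outside of the BI Server, a bare-bones Python equivalent looks like this; the SMTP server, addresses, and attachment name are all placeholders, not values taken from the sample xaction:

    import smtplib
    from email.message import EmailMessage

    # Placeholder values standing in for the xaction's email inputs.
    msg = EmailMessage()
    msg["From"] = "bi-server@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Scheduled report"
    msg.set_content("The attached report was generated by the BI server.")

    with open("sales_report.pdf", "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="sales_report.pdf")

    with smtplib.SMTP("smtp.example.com", 25) as server:
        server.send_message(msg)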
