Tableau – Michael Sandberg's Data Visualization Blog

DataViz: Squaring the Pie Chart


Tableau Customer Conference 2014 (TCC14): Keynote with Christian Chabot and Chris Stolte on the Art of Analytics

Bryan Brandow: Triggering Cubes & Extracts using Tableau or MicroStrategy


Bryan Brandow (photo, right), a Data Engineering Manager for a large social media company, is one of my favorite bloggers out there when it comes to thought leadership and digging deep into the technical aspects of Tableau and MicroStrategy. Bryan just posted about triggering cubes and extracts on his blog. Here is a brief synopsis.

One of the functions that never seems to be included in BI tools is an easy way to kick off an application cache job once your ETL is finished. MicroStrategy's Cubes and Tableau's Extracts both rely on manual or time-based refresh schedules, but this leaves you in a position where your data will land in the database and you'll either have a large gap before the dashboard is updated or you'll be refreshing constantly and wasting lots of system resources. They both come with command-line tools for kicking off a refresh, but then it's up to you to figure out how to link your ETL jobs to call these commands. What follows is a solution that works in my environment and will probably work for yours as well. There are, of course, a lot of ways for your ETL tool to tell your BI tool that it's time to refresh a cache, but this is my take on it. You won't find a download-and-install software package here since everyone's environment is different, but you will find ample blueprints and examples for how to build your own for your platform and for whatever BI tool you use (from what I've observed, this setup is fairly common). Trigger was first demoed at the Tableau Conference 2014. You can jump to the Trigger demo here.
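To make this concrete, here is a minimal sketch of the pattern Bryan describes, assuming a post-load hook in your ETL job that shells out to Tableau's tabcmd utility. This is not Bryan's Trigger tool itself; the server URL, account, and datasource name below are placeholders for your own environment.

import subprocess

# Placeholder connection details -- substitute your own environment's values.
SERVER = "https://tableau.example.com"
USER = "etl_service"
PASSWORD_FILE = "/etc/secrets/tabcmd.pass"  # keep credentials out of the script
DATASOURCE = "Daily Sales"

def refresh_extract(datasource: str) -> None:
    """Ask Tableau Server to refresh an extract once the ETL load is done.

    Wraps the tabcmd CLI that ships with Tableau Server; any failed step
    raises CalledProcessError so the ETL scheduler can flag the job.
    """
    subprocess.run(
        ["tabcmd", "login", "-s", SERVER, "-u", USER,
         "--password-file", PASSWORD_FILE],
        check=True,
    )
    subprocess.run(
        ["tabcmd", "refreshextracts", "--datasource", datasource,
         "--synchronous"],
        check=True,
    )
    subprocess.run(["tabcmd", "logout"], check=True)

if __name__ == "__main__":
    # Call this as the final step of the ETL job (cron, or your ETL tool's
    # post-load hook), so the dashboard refresh starts as soon as data lands.
    refresh_extract(DATASOURCE)

A MicroStrategy cube can be kicked off the same way by swapping in its command-line utility; the pattern (ETL finishes, then calls the BI tool's CLI) is what Bryan's Trigger generalizes.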

I recommend you click on the link above and give his blog post a full read. It is well worth it.

Best regards,

Michael


Filed under: Bryan Brandow, Bryan's BI Blog, ETL, MicroStrategy, Tableau, Triggers

2015 Gartner Magic Quadrant for Business Intelligence and Analytics Platforms – Tableau Wins Again


[Image: 2015 Gartner Magic Quadrant for BI & Analytics – click to read the report]

Tableau's intuitive, visual-based data discovery capabilities have transformed business users' expectations about what they can discover in data and share without extensive skills or training with a BI platform. Tableau's revenue growth during the past few years has passed through the $100 million, $200 million and $300 million revenue thresholds at an extraordinary rate compared with other software and technology companies.

Tableau has a strong position on the Ability to Execute axis of the Leaders quadrant because of the company's successful "land and expand" strategy, which has driven much of its growth momentum. Many of Gartner's BI and analytics clients are seeing Tableau usage expand in their organizations and have had to adapt their strategies, incorporating the requirements that new users and new uses of Tableau bring into existing deployment and information governance models and information infrastructures. Despite its exceptional growth, which can cause growing pains, Tableau has continued to deliver a stellar customer experience and business value. We expect that Tableau will continue to rapidly expand its partner network and to improve its international presence during the coming years.


Filed under: Business Intelligence, Gartner, Magic Quadrant, Tableau

Tapestry Conference 2015: Odds and Ends


Readers:

Had a great conference and want to share various odds and ends from the last two days.

Hope to post more in a day or two.

Enjoy!

Michael

The Graduate Athens – Funky, Eclectic.


Breakfast – Shrimp and Grits


The Athens, GA Double-Barreled Cannon. It failed miserably. Ellie Fields: Trying and failing is O.K.


 Catherine Madden (#catmule) Sketches


World History in One Picture, Popular Science, July 1930


Seven Data Story Types – Ben Jones


Me at the Demo and Poster Session


Kim Rees, Periscopic.com, Gun Deaths in 2013 [Click Image to Watch Interactive Visualization]


Sound Bites

Let us go forth and build double-barreled cannons and deed trees to themselves. -Ellie Fields

Revelation is based on prior knowledge. -Hannah Fairfield

Show what you know as well as what you don’t know. -Hannah Fairfield

All storytelling is manipulation. -Ken Burns

Do good with data. -Kim Rees

Data only has so much elasticity before it breaks down. -Kim Rees

 


Filed under: Ben Jones, Catherine Madden, Robert Kosara, Tableau, Tapestry Conference

Tapestry Conference 2015: Interesting Visualizations From Presentations (and more Odds and Ends) – Part 1


Readers:

More great information from the Tapestry Conference.

Enjoy!

Michael

The Graduate Athens – From Hotel Directory


Chad Skelton – Income Calculator (White Male vs. Black Female) – Inequality in Earning Income for the Same Job


More Catherine Madden (#catmule) Sketches


RJ Andrews, Info We Trust, Creative Routines


“We all have the same 24 hours that Beyoncé has” and its various iterations took the web by storm in late 2013 as the megastar became the figurehead of not only having it all, but being able to somehow do it all too.

How do creatives – composers, painters, writers, scientists, philosophers – find the time to produce their opus? Mason Currey investigated the rigid Daily Rituals that hundreds of creatives practiced in order to carve out time, every day, to work their craft. Some kept to the same disciplined regimen for decades while others locked in patterns only while working on specific works.


 

Kim Rees, Periscopic.com, How Nations Fare in PhDs by Sex [Click Image to Watch Interactive Visualization]



Filed under: Chad Skelton, Hannah Fairfield, Info We Trust, Kennedy Elliott, Kim Rees, RJ Andrews, Tableau, Tapestry Conference

DataViz Using Tableau: A History of Crayola Colors


Readers:

Here is a great dataviz from Tableau Public.

It was created by Stephen Wagner and was originally published on Analytics Wagner.

Stephen Wagner explores the evolution of Crayola colors, from 1903 until now.

Click on a crayon in the “Box of Colors” to learn the name of the color, how long it has been in production, and any additional facts.

Enjoy!

Michael

History of Crayola Colors


Filed under: Colors, Crayola Crayons, Tableau

DataViz Using Tableau: Another Way of Looking at Graduation Rates


Readers:

Jon Boeckenstedt (photo, right), who works in Enrollment Management for DePaul University, created this data visualization using Tableau.

Jon's thought process, and why he created the visualization he did, is noted below.

What do you think of this visualization and as Jon asks: What do you see in the data?

Best Regards,

Michael

Another Way of Looking at Graduation Rates

Jon saw an article in his Facebook feed about college ROI, although it was called the 50 Best Private Colleges for Earning Your Degree on Time. As is often the case, there was nothing really wrong with the facts of that article: you see a nice little table showing the 50 colleges with the highest graduation rates.

But it got Jon thinking: What if a high graduation rate wasn't enough? What if a considerable portion of your freshman class that graduates takes longer than four years to do so? Is that a good deal? He then created some hypotheticals:

College A: 1000 freshmen, 800 who graduate within four years, 900 who graduate in five, and 950 who graduate in six.  So the four-, five-, and six-year graduation rates are 80%, 90%, and 95%.  But of the 950 who eventually graduate, only 84.2% do so in four years.

College B: 1000 freshmen, 750 who graduate within four years, 775 who graduate in five, and 800 who graduate in six.  So the four-, five-, and six-year graduation rates are 75%, 77.5%, and 80%. Thus, of the 800 who eventually graduate, almost 94% do so in four years.

College C: 1000 freshmen, 550 who graduate within four years, 600 who graduate in five, and 625 who graduate in six.  So the four-, five-, and six-year graduation rates are 55%, 60%, and 62.5%. Of the 625 who eventually graduate, 88% do so in four years.
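Jon's arithmetic is easy to reproduce; here is a short sketch in Python, using only the numbers from the hypotheticals above, that makes the distinction between the two rates explicit.

# Of the students who eventually graduate (within six years),
# what share finished in four?
colleges = {
    "College A": {"freshmen": 1000, "four_yr": 800, "six_yr": 950},
    "College B": {"freshmen": 1000, "four_yr": 750, "six_yr": 800},
    "College C": {"freshmen": 1000, "four_yr": 550, "six_yr": 625},
}

for name, c in colleges.items():
    headline_rate = c["four_yr"] / c["freshmen"]   # the rate rankings cite
    on_time_share = c["four_yr"] / c["six_yr"]     # share of grads on time
    print(f"{name}: 4-year rate {headline_rate:.1%}, "
          f"on-time share of graduates {on_time_share:.1%}")

# College A: 4-year rate 80.0%, on-time share of graduates 84.2%
# College B: 4-year rate 75.0%, on-time share of graduates 93.8%
# College C: 4-year rate 55.0%, on-time share of graduates 88.0%

Ranked by headline rate, the order is A, B, C; ranked by the on-time share of graduates, it flips to B, C, A.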

If you were choosing among these three colleges, which might you choose?  The easy money says you go with College A, the one with the highest graduation rate. College B would be your second choice, and C would be your third.  But what if you are absolutely, positively certain you’ll graduate from the college you choose? College B is first, then College C, then College A.

Data can be tricky. Jon has noted many times in the past that things like graduation rates are really almost inputs, not outputs: If you choose wealthy, well-educated students, you’re going to have higher graduation rates.  It’s a classic case of making a silk purse out of, well, silk.

Jon tried to demonstrate this in the visualization he created below, and he likes the simplicity here.  Each dot is a college (hover over it for details).  They’re in boxes based on the average freshman ACT score across the top, and the percentage of students with Pell along the side.  The dots are colored by four-year graduation rates, and you should see right away the pattern that emerges.  Red dots (top right) tend to be selective colleges with fewer poor students.

But if you want to look at the chance a graduate will finish in four years, use the filter at the bottom right. Find a number you like, pull the left slider up to it, and see who remains. (Just a note: Jon is a little suspicious of any number of 100% on this scale, which would mean absolutely no students who graduate take longer than four years to do so. It might be true, but it's hard to believe, so he would set the right slider to 99% at the most.) Jon also reminds us that there is a lot of bad IPEDS data out there, so don't place any bar bets on what you see here.

What do you see? Click on the image below and find out.

[Image: Graduation Rates dashboard]


Filed under: Education, graduation rates, Interactive Data Visualization, Jon Boeckenstedt, Tableau

Tableau: Star Wars Sentiment Analysis


Readers:

The Force is strong with Tableau users lately. It all started when Tableau built a Web Data Connector (WDC) to the Star Wars API. Then, they sat back and watched as the Tableau community put the data to work.

The Star Wars data fever didn’t end there, though. In the nearly two weeks since Episode VII’s release, dozens of visualizations have popped up all over the web.

Here is one of my favorites.

Star Wars Sentiment Analysis: I could spend hours playing with this visualization by Adam McCann. In fact, I recommend grabbing a bucket of popcorn, sitting down with your favorite Wookiee and some nerf herders, and walking through his entire analysis of A New Hope in chronological sequence.

May the Force be with you!

Michael

Star Wars Sentiment Analysis


 

 


Filed under: Data Visualization, Dataviz, Star Wars, Tableau, Uncategorized

What the hell happened today? Gartner Magic Quadrant for BI and Analytics Platforms, 2016


Hello Readers:

What the hell happened today?

First, let’s look at Tableau Software’s stock price.

[Chart: Tableau Software stock price]

Tableau Software lost half its market value and its shares hit an all-time low a day after it cut its full-year earnings guidance to between 22 cents and 35 cents a share, around half the 57 cents analysts had expected. Tableau shares were last down 49.44 percent at $41.33.

The company cut its full-year 2016 revenue forecast to $830 million to $850 million from prior guidance of $845 million to $865 million. And Tableau’s fourth-quarter revenue of $202.8 million only narrowly beat analyst expectations of $200.8 million.

“When you get a company that barely beats that has been beating by a longshot, people are going to be scratching their heads a little bit,” said Brian White, an analyst at Drexel Hamilton. “If that guy can’t show much upside, what does that mean for the rest of the sector?”

Then, second, LinkedIn, the business network, shocked Wall Street with a revenue forecast that fell far short of expectations, sending its shares plunging 43 percent on Friday and wiping out about $11 billion of market value.

“They’re a proxy for enterprise spend,” said Daniel Ives, an analyst at FBR, adding LinkedIn’s bad results were exacerbating fears around spending among customers of big enterprise companies.

So concerns spilled over into other business services and software firms. Salesforce.com and Workday saw drops of more than 10 percent each; Salesforce's 13.6 percent decline was its worst one-day loss since October 2008.

Shares of Splunk Inc, a data analytics software maker, dropped as much as 28 percent.

Investors are questioning whether enterprise customers will be willing to spend money to take advantage of trends like big-data analytics and cloud computing.

Given that companies that previously rarely cited broader economic trends as a possible drag on growth discussed them in recent earnings calls, including Tableau along with iPhone maker Apple, investors are growing concerned about factors such as slowing U.S. job growth, analysts said. [1]

Third, I find the Tableau stock price drop especially interesting given that the 2016 Gartner Magic Quadrant for BI and Analytics Platforms annual report, released yesterday, rated Tableau second overall and best on ability to execute.

I was scratching my head when I came across an article published today by Jake Freivald (photo, right), Vice President of Corporate Marketing for Information Builders. Jake is responsible for marketing operations: branding, marketing communications, events, Web marketing, and direct marketing. He graduated from Cornell University with a Bachelor of Science in Electrical Engineering in 1991.

I like Jake’s take on Gartner’s change of direction on how they are going to create the Magic Quadrant for Business Intelligence and Analytics Platforms going forward. I liked it so much, in fact, I have included it in its entirety below for you to read.

So, before I segue into the article, below is an image of this year’s Gartner Magic Quadrant for Business Intelligence and Analytics Platforms. Jake’s article will help explain how some of the mighty have fallen.

Next thing you know, I'll be reading an article about how Amazon.com came up with some kinda button to put next to your favorite consumables, where you just press the button when you are running out and it will automatically order more and deliver it to your door.

Best Regards and thanks Jake for the insights!

Michael

[Image: Amazon Dash]

 

[Image: 2016 Gartner Magic Quadrant for Business Intelligence and Analytics Platforms] [3]

The new Gartner Magic Quadrant for Business Intelligence and Analytics Platforms is out. [2] (Download it here.) Interestingly, it’s a pretty complete break from prior years. How complete? Here’s how they put it:

As a result of this change and the resulting effect on the shape and composition of the BI and analytics Magic Quadrant, historical comparison with past years (to assess relative vendor movement) is irrelevant and therefore strongly discouraged.

And how! They’ve gone from looking at very broad capabilities — that used to include data discovery and other business-user analytics, but also included dashboards and reporting that can be used for broad-scale deployments — to looking only at products that can be used by business users with little-to-no IT involvement.

To understand what’s going on here, it helps to understand Gartner’s attitude toward “bi-modal IT“. Mode 1 is the traditional, IT-governed, stable form of IT delivery. Mode 2 is the exploratory, agile, non-linear form of IT delivery. This quadrant is focused on BI and analytics that supports Mode 2.

Mode 1, which had been important in previous quadrants, is essentially ignored in this one. As a practical matter, it isn’t going away, so one of the more important things to figure out is how to promote mode 2 (exploratory insights found by individuals) to mode 1 (production apps for the many). More on that in a moment.

This narrowing of focus, from mode 1 + mode 2 in previous years to the current emphasis on mode 2, reduces both the use cases and the number of products that can be considered.

Out of a large number of companies that were considered for the quadrant — I can't find the number in the report, but if I recall correctly, it was something like 87 — only 24 made it onto the chart. Some that were leaders (us included) are in different parts of the quadrant (there are only three ranked as leaders), and some significant vendors didn't make it into the quadrant at all.

The criteria are so different, in fact, that the only product they ranked out of our entire product line is InfoAssist+, our self-service analytics tool. This product has been around for quite a while and has hundreds of thousands of users worldwide, and includes interactive dashboards, reporting, ad hoc query, and the creation of in-document analytics — but, until about a year ago, it didn’t contain data discovery.

In other words, we’re essentially a start-up in the market niche they’re covering.

Personally, I don’t mind being a start-up in this niche. Most start-ups don’t have the organic strengths that we have: a mature support organization, time-tested technology, more than ten thousand customers, and millions of users.

Moreover, we have strengths in mode 1 that most other vendors in the quadrant don't have. We can take visualizations, dashboards, queries, reports, and so on from InfoAssist+ and share them with — literally, no exaggeration — hundreds of thousands of users, even millions, inside or outside the corporate firewall. See why other industry analysts position us as a leader here.

It’s easier for us to enhance our data discovery technologies than it will be for our data discovery competitors to develop that kind of analytical application power. Which means we see our position as a start-up on this quadrant as an opportunity to move up and to the right over the next few years.

We look forward to working with you along the way. :)

Don’t forget to download your copy of the report.

 


* This leads me to my one real gripe with the quadrant: If they were going to radically change the definitions, I would have preferred that they change the name from “Business Intelligence and Analytics Platforms”. A rose by any other name and all that; nevertheless, a specific petal, no matter how beautiful, doth not a rose make. Despite their warning not to make comparisons, we’re going to see some confusion here.

Sources:
[1] Sarah McBride, Lance Tupper and Saqib Iqbal Ahmed; Editing by Dan Grebler and Meredith Mazzilli, Bing.com, February 5, 2016.

[2] Jake Freivald, Gartner Magic Quadrant for BI and Analytics Platforms, 2016: My Take, Information Builders, February 5, 2016.

[3] Magic Quadrant for Business Intelligence and Analytics Platforms, Gartner, ID: G00275847, February 4, 2016.

 


Filed under: Analytics, Business Analytics, Business Intelligence, Gartner, Information Builders, Jake Freivald, Magic Quadrant, Tableau, Uncategorized

Storytelling with Data: Our Brains Crave Structure + Love Oddballs

Source: Martha Kang, datasciencecentral.com, March 5, 2016; written by Rawi Nanakul & Marnie Morales, http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A392236

Storytelling with Data

We create, interpret, and experience stories every day, whether we realize it or not. Our brains are constantly receiving input and stringing things together in order for us to make sense of the world. While our brains create countless stories, only the few great ones stay with us. These make us cry, laugh, or embrace a new perspective.

Understanding how our brains interpret the world can help us become better storytellers. That's where neuroscience comes in. The field of neuroscience covers any study of the nervous system, from molecules within nerve endings, to data processing, to even complex social behaviors like those studied in economics.

Take the Reader from the Known to the Unknown

So let's put our brains to the test. Take a look at the photo below for a few seconds. What do you see?

[Photo: a boxing match scene]

We know very little about this scene. But because our brains crave structure, we still try to see the story. We take things we know—boxing gloves, children, and a corner man—and try to infer what the unknown might be.

A good story takes us from the Known to the Unknown. This simple premise is the key to telling stories for the brain. Let’s apply this concept to a comic. Why a comic? Comics are similar to data stories in that they present a sequence of panes containing different data points that lead you through a story.

[Comic credit: xkcd]

Known:

Election year is coming up.
The common joke of “if X wins, then I am leaving the country.”

Unknown (Punchline):

Dying in Canada = real.
Canada is the matrix.

What did we do in the course of reading the comic? We’re going to look at some basic brain anatomy to understand what our brain does when reading something like this.

Good Stories Activate More Parts of Our Brain

As you look at the comic, the prefrontal cortex in your frontal lobe kicks into gear, and your brain’s cognitive control goes to work. You’re also processing data that comes into your brain as visual input. From your eyes, that data is sent to the primary visual cortex at the back of your brain and onward along two processing streams: the “what” and the “where” pathways.

The "what" pathway (in purple) uses detailed visual information to identify what we see. It pieces together the lines and figures that add up to the comic's characters. It also recognizes the letters and words, and helps decipher their meaning with the help of additional cortical regions like Wernicke's Area, a part of our language system.

The “where” pathway (in green) processes where things are in space. We know this data stream is important and active during reading because adults with reading disabilities like dyslexia often have disrupted functioning of this pathway.

So when we’re interpreting visual information, we’re activating quite a bit of our brains to make sense of the data we’re presented.

Things get more complex from there, because as we interpret the stories we see, even more brain areas become active. Part of the way we comprehend stories is through a simulation of what we see. So you can potentially activate parts of your brain involved in motor control or your sense of touch.

And imagine if you connect emotionally to the story you're reading. You'll be activating areas of your brain involved in emotion (the limbic system). So when reading a good story, whether it's prose, a comic strip, or a data-driven story, you have the potential to get almost global activation of your brain. And the most impactful and memorable stories are those that engage us most.

Channel Your Inner Oddball

[Image: a row of stick figures, one unlike the others]

Now that we know some of the anatomy, let’s look at the behavioral applications of what we know. Take a look at the stick figures above and read them from left to right. Which one is not like the others? We can quickly see which figure is out of place. Our eyes jump right to it.

How did we know which one was the oddball figure without anyone telling us what it looked like? We had already established a baseline: the repeated initial figure was the normal one. And when the outlier was presented, we knew right away that it didn't belong.

This experiment is a common attentional process test called the oddball paradigm. A baseline is presented through repetition, then an oddball is presented. This should remind you of the Known-to-Unknown formula I mentioned earlier. When a strong baseline has been created, we are prepared for the oddball (an unexpected twist or climax) and enjoy it when it occurs.

Our brain processes the information based on our experience of the input. Below is a figure of an ERP, or event-related potential. ERPs are averaged waveforms that measure electrical activity at your scalp. We can use them to measure the speed of attentional processing.

[Figure: event-related potential (ERP) waveforms – Olichney, Nanakul, et al. 2012]

In the left figure above, we see the brain's response to standard stimuli (each tick mark is 100 ms). You see relatively flat lines after the initial peak. The flat lines are expected because standard stimuli are essentially noise, and our mind zones out once they have been normalized.

The figure on the right shows the oddball (or target) tone, with a peak around 300 ms after the stimulus (also known as the P300). This peak comes from our brain detecting the oddball and concluding that this is the item to pay attention to. The peak is only possible once a clear baseline has been established.

 

What This Means for Storytelling

The example above shows us we have to lay down a good foundation and logical progression to get to our peak. Without structure, our audience will experience our story as noise and tune out, like our figure on the left.

When creating your own stories, remember that the brain craves structure and loves oddballs. The brain processes information by taking information it already knows to infer what a new piece of information might be. Therefore, making it as easy as possible for the brain to understand the story is key to delivering a successful climax or twist.

Now that you have some basic understanding of brain anatomy and neuroscience, try applying the lessons learned to your data stories. Create dashboards that engage the senses through pleasing designs, shapes, color, text, and interactivity. Embrace the oddball paradigm by clearly establishing a baseline before delivering your findings. That way, the audience’s mind will be primed to attend to it. And their brains will help them remember your story as one of the few good ones.

To learn more about storytelling with data, visit the Tableau blog.


Filed under: Brain, Data Science Central, Data Visualization, Storytelling, Tableau, Uncategorized

Tapestry 2016: Diagram Showing Relative Popularity of Women’s Weapons

Cognos, RAVE and D3: An Interview with Cognos Paul


Within the worldwide Cognos community, when you ask someone who to turn to for some special trick or complex feature you need to implement, the first name that comes up is Cognos Paul (photo, right).

Paul Mendelson, aka Cognos Paul, is a certified freelance Cognos developer. He has been working, tinkering, and playing with Cognos since 2007.

For most of his professional career, Paul has consulted on projects for a wide array of companies. While sometimes difficult, especially as a project comes to a close, this has given him the opportunity to learn from a wide range of methodologies spanning many industries. Paul's clients have included banks, pharmaceutical companies, government and military organizations, and institutions dealing in manufacturing, logistics, insurance, and telecoms (the list goes on). Without the opportunities of working for these clients, Paul feels he would not know half of the techniques half as well as he should, and would like Cognos half as much as it deserves.

If you have a challenging Cognos question and are seeking help, you can contact Paul at cognospaul@gmail.com and come to an arrangement.

1. IBM recently released its latest version of Cognos (rebranded as Cognos Analytics). Can you tell us your thoughts on the new release and also how Watson Analytics plays a part in it?

Cognos Paul: The new version seems heavily self-service centric, with the advanced dashboarding tools and the various improvements to Workspace Advanced. The most exciting part of it is the expanded set of data sources used to power the new dashboards. It should make self-service dashboards much easier for users to build. The caveat is that, as usual, it is a complex tool. Users will absolutely need to be trained, or the Cognos IT group will be swamped with issues.

Watson is a drastically new direction. While I haven’t played with it much myself, it seems to make statistical analysis open to non-statisticians. I still have some reservations, but I’m looking forward to seeing more.

2. From a data visualization perspective, why would I want to consider Cognos Analytics versus, say, a Tableau, Microsoft Power BI or MicroStrategy?

Cognos Paul: Data visualization is actually one of Cognos's historical weak spots (although they are working on it, with RAVE). I believe Tableau still sets the standard for advanced dataviz capabilities. That being said, the other capabilities offered by Cognos more than make up for it.

The flexibility of report design, and of the ways end users can consume the reports, is without compare.


3. MicroStrategy recently released their v10.3 which offers an integration to a D3.js library. Is Cognos Analytics doing anything similar?

Cognos Paul: Several years ago, IBM released a tool called the Rapidly Adaptive Visualization Engine, or RAVE. Using a declarative language, the author is able to easily and quickly build very advanced graphs. Users can define the graph to modify shape, size, color, and opacity based on any elements in the data set.

Admittedly, RAVE doesn't offer the complexity of D3, which is why D3 is being integrated in one of the upcoming versions. From what I understand, RAVE will be able to use almost any publicly available D3 library.

4. What advice would you give a developer who is new to Cognos?

Cognos Paul: Taking a Framework Manager (FM) course is absolutely necessary. The best practices for framework model development make sense, and deviating from them can cause performance issues.

Report development does take some time to get the hang of. There are always multiple ways of doing things, and if you're working too hard, you're doing something wrong. Try multiple approaches and don't be afraid to ask questions. Google Search is your friend; if you're having a problem with something, chances are good that other people have as well.

Most important, don’t be afraid to try new things. There are many things Cognos can do, and many associated tools. If something isn’t working one way, there is a very good chance it will work another.

 


Filed under: Cognos Paul, Dataviz, IBM Cognos, IBM RAVE, Microsoft Power BI, MicroStrategy, Tableau, Uncategorized

Guest Post: Tableau and Backwards Compatibility


Readers:

Today, I am going to divert from my normal blog posting and share a guest post from my co-worker, Ken Black (photo, right).

Ken uses Alteryx, Tableau and other analytical techniques to investigate data. He has significant experience working with large data sets from both business and scientific projects that spanned from millions to several billion records. Ken is skilled in data analysis, computer programming, trend modeling, data mining, and discovering unknown aspects of business performance in large data sets. Also, he is adept at discovering cause and effect relationships hidden within historical data, including finding previous business successes. He has experience in multi-language computer programming and debugging, extracting and transforming data into great visual output, and teaching and explaining complex concepts.

He has a technical blog at 3danim8.wordpress.com.

Here is Ken’s guest blog.

Thanks, Ken.

Regards to all!

Michael

————————————————————————

Introduction

I definitely have done my fair share of Tableau hacking through the years. I normally don't publish what I do to modify the XML inside the *.twb file, however, because the methods are not always guaranteed to work. I also do not want to impact the spirit of Tableau – which is to keep things simple, safe and reproducible – nor do I want to cause any problems for anyone in their work.

Unless the user has a fair amount of XML experience and is really comfortable with making changes, altering Tableau *.twb files can be a dangerous task to undertake. It is really easy to corrupt a *.twb file. So if you decide to try this one, be sure to make a backup copy of your original *.twb file before making any changes. You can now consider yourself forewarned.

Motivation

Sometimes I decide to break the rules a little when there is good reason. In the past couple of weeks, I have been asked the same question on a number of occasions, which is why I am motivated to write this article.

The question is this: Can I make a version 9.3 Tableau file (*.twb) backwards compatible so that it works with Tableau Desktop 9.2?

This question has arisen for a couple of reasons. First, some users popped forward to Desktop 9.3 when the Tableau server version they were using was 9.2. They did a lot of work in 9.3 only to find that they couldn’t publish those workbooks/dashboards on the version 9.2 server. Also, some people are participating in Tableau Beta testing and they had some work completed in version 10 that they wanted to bring back to version 9.2 for the same reason.

Since I had to show a few different people how to do it, I thought I’d write a quick note to share the technique.

Backwards Tableau Compatibility

I’m not on the Tableau development team, but I’ve written enough software to understand the issues with compatibility. In fact, I wrote an extensive XML schema to verify input into a very sophisticated model that integrated groundwater and surface water flow. Having experience like that allows me to readily understand the XML files developed by Tableau.

When Tableau releases a new version, say going from version 9.2 to 9.3, this usually means that new features are being added to the software. What this means is that new XML fields (parent and child elements) are added to the software platform, and these elements are written to and stored in the *.twb file if the user has activated them in their Tableau workbooks.

When a user implements a new feature or object in 9.3, for example, it really is not possible to go backwards to version 9.2. The reason for this is obvious – the software features described by those XML tags were not available in 9.2.

If you try this, the Tableau 9.2 XML parser will not understand the meaning of those elements (because they are not in the XML schema) and you will get an error message informing you that there is a problem reading the *.twb file. It has been a long time since this happened to me, so I can't remember exactly what the error messages look like.

To summarize, if you try to migrate a 9.3 file that has new features back to 9.2, you will not be successful. The technique I am about to show will only work if the features you have in your workbook were available (and unchanged in definition) in the version you are trying to revert to. The good news is, in many cases, backwards compatibility will be possible.

The Backward Compatibility Technique

For a lot of the Tableau work we do, we probably have not implemented the newest features in our workbooks when a new release is issued. It normally takes us some time to discover these things and begin using them. So if you find yourself in this situation, there is hope that you can use this technique successfully.

Figure 1 shows the first 12 lines of a version 9.3 *.twb file I randomly picked from my working files. If I try to open this file in version 9.2, I get the error message shown in Figure 2.


Figure 1 – A *.twb file from Tableau version 9.3. I want to move this backwards to version 9.2.

Figure 2 – The error message you receive when trying to open a newer *.twb in an older version of Desktop.

My goal is to push this 9.3 *.twb file backwards to version 9.2 so I can publish it on a 9.2 server. There are only two things I need to do. Please refer to Figure 1 to see the content of the line numbers I refer to below.

Steps
  1. Swap out lines 3 and 4 and replace them with the equivalent lines in a version 9.2 file.
  2. Change the version number from 9.3 to 9.2 in line 10.

After the changes are made, my 9.3 file now appears to be a 9.2 file, as shown in Figure 3. The Tableau XML parser will read the file and believe that it was created in version 9.2. If you do not have any new features in the *.twb file that originated in version 9.3, you will be fine and the file will render in Tableau.

Figure 3 – A *.twb file that was originally created in Tableau version 9.3 but has been updated to look like a version 9.2 file.

Since I am a Tableau Beta tester, it is not uncommon for me to have multiple versions of Desktop installed. For this reason, I have had to use this technique recently to convert some version 10.0 files back to version 9.2 for publishing. It is easy to make the mistake of picking a more recent Desktop version only to realize later that you need to publish your work to an older version of Tableau Server or Tableau Public.

Multiple Data Sources

If you happen to have multiple data sources in your workbook, you will have to make changes like those described above (#2) to make sure all the datasources appear to be version 9.2. You can search the *.twb file for the word "version" to find these other occurrences and then make the required changes.
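If you do this often, the hand edits can be scripted. Below is a small sketch in Python (not a supported Tableau feature, and subject to the same caveats Ken gives above): it backs up the file, then rewrites every version attribute it finds, which covers step 2 and the per-datasource edits. The build-number lines from step 1 may still need swapping by hand.

import re
import shutil
import sys

def downgrade_twb(path: str, old: str, new: str) -> None:
    """Rewrite version='old' attributes in a *.twb so an older Desktop opens it.

    Only works if the workbook uses no features introduced after the
    target version; always inspect the result before publishing.
    """
    shutil.copyfile(path, path + ".bak")  # keep a backup, as Ken warns
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Replace the workbook-level version attribute and each datasource's.
    pattern = re.compile(r"version='%s'" % re.escape(old))
    text, count = pattern.subn("version='%s'" % new, text)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    print(f"Rewrote {count} version attribute(s) in {path}")

if __name__ == "__main__":
    # Usage: python downgrade_twb.py workbook.twb 9.3 9.2
    downgrade_twb(sys.argv[1], sys.argv[2], sys.argv[3])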

Final Thoughts

Altering Tableau workbook files (*.twb) can be dangerous, so users beware! Having the knowledge to do so is a benefit, however; it can get you out of a jam sometimes and help you avoid having to rebuild a workbook and dashboards in the earlier version. This technique may save you some serious time one day.

Update – The Next Morning

I awoke today (6/10/16) to a LinkedIn update that made me laugh. Yesterday another article was published on this same topic on the Tableau blog. The author is the incredible Jeffrey Shaffer, and he had to kick my a$$ by publishing a web-based tool that makes these changes for you!

Click here for a link to this article on his awesome website. Several days later, Jeffrey wrote another article that explains in detail how his tool works. Click here to read that article.

The Story Behind the Article

Once I saw Jeff's work, it got me thinking about what motivates me to write articles like this. To help you understand how things like this come into existence, I'm going to show you the story behind the article.

Early on 6/8/16, a colleague of mine in Arizona wrote me a question as shown in Figure 4. His name is Michael Sandberg and you must check out his awesome and beautiful data visualization blog that focuses on Infographics.

Figure 4 – The original question from Michael Sandberg regarding version compatibility.

I wrote him a response later in the day as shown in Figure 5.

Figure 5 – My original explanation to him that was expanded into this article.

Sometime after that, he tried the method and it worked for him as shown in Figure 6.

Figure 6 – Michael's kind words were motivation for me to write this article.

Those three simple words Michael wrote to me inspired me to write this piece. That is how this stuff happens. So thank you, Michael, for your kind words and your great work on your blog.


Filed under: Alteryx, Data Visualization, Dataviz, DataViz Tip, Ken Black, Tableau, Uncategorized

Tableau Iron Viz Winner 2016: Political Polarization in the US


[Click on Image to be redirected to Tableau Public]

Political Polarization in the US

Originally Published on: Interworks.com

With this visualization of Pew Research Center's study on political polarization in the United States, Robert Rouse is the winner of our second Iron Viz Feeder, on politics. Navigate between the different story points using the arrows at the top. Slide the cursors on page 2 to get your own ideological score. Use the top-right filter to see how various segments of the US population break down on six key topics on page 3.

Filed under: Dataviz, Infographics, InterWorks, Iron Viz, Politics, Tableau, Uncategorized

Tableau Public: Who Lies The Most? The 2016 Presidential Election


UPDATE – October 23, 2016

Readers:

I have updated my data with the latest from PolitiFact.com for the interactive version of this chart published on Tableau Public.

Thanks,

Michael

Readers:

Back on July 24, 2016, I blogged a stacked bar chart titled Who Lies More?, created by Robert Mann, which addressed the question of which politicians lie more. His chart was based on data from PolitiFact.

I have received a lot of site traffic from posting Robert's chart. I was interested in developing a data visualization for Tableau Public, so I went out and extracted fresh data from PolitiFact.com for the Executive Branch (e.g., President Obama, VP Joe Biden), our congressional leaders on both sides of the aisle, and the five major political parties that have selected a candidate to run for president (and vice president).

For my Tableau workbook, all fact counts were taken from the PolitiFact Web site starting September 9, 2016. The date each candidate's facts were extracted can be seen when you hover over a bubble on the Who Lies The Most dashboard and view the tooltip information.

Click on the image below to go to the interactive version in Tableau Public.

[Image: Who Lies The Most dashboard]

All headshot photos of the candidates were taken from the PolitiFact Web site or Wikipedia. I tried to ensure each candidate has a smile on their face rather than a frown or an angry look, to remove any bias associated with the photo.

The size of each bubble reflects the total number of facts available for that candidate on the PolitiFact Web site. If a candidate currently has fewer than 18 facts, I did not include them in the bubble matrix, but did show them in the upper-left corner of the dashboard with the total number of facts each of them had. As new facts are added to their counts on PolitiFact, I will update them accordingly and integrate them into the matrix when they each reach 18 or more facts.

The color of the bubbles was determined as follows:

Dark Red: > 67% False
Red: > 57% False
Orange: > 52% False
Yellow: approximately 50-50
Light Blue: > 52% True
Blue: > 57% True
Dark Blue: > 67% True
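In case it helps to see the legend as a calculation, here is a sketch of the bucketing logic in Python; the exact boundary handling around 50-50 is my assumption, since only the bands above are published.

def bubble_color(true_share: float) -> str:
    """Map a candidate's share of true rulings (0.0-1.0) to a bubble color."""
    false_share = 1.0 - true_share
    if false_share > 0.67:
        return "Dark Red"
    if false_share > 0.57:
        return "Red"
    if false_share > 0.52:
        return "Orange"
    if true_share > 0.67:
        return "Dark Blue"
    if true_share > 0.57:
        return "Blue"
    if true_share > 0.52:
        return "Light Blue"
    return "Yellow"  # approximately 50-50 (assumed band: 48-52% either way)

# Example: a candidate whose facts are only 25% true lands in Dark Red.
assert bubble_color(0.25) == "Dark Red"

In Tableau itself this would be a calculated field with the same nested IF logic, placed on the Color shelf.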

This is a "first cut" of this Tableau workbook. It needs some more work, and I have additional changes I want to make over the next few weeks; I will need that much time, as I have some hot projects going on at work.

If you find any errors or omissions in any of the charts provided, they were unintentional and in no way meant to make a certain candidate look better or worse. Please e-mail me at michael@dataarchaeology.net with your suggested corrections and I will make those that are fair and reasonable.

I would also like to hear your suggestions for making this workbook as fair and accurate as possible. And, no, I cannot influence PolitiFact on their methodology or how they report their data.

Thanks for stopping by.

Best regards,

Michael

 


Filed under: Data Archaeology, Data Visualization, Interactive Data Visualization, Political DataViz, Politics, PolitiFact, Tableau, Uncategorized

UPDATED: Who Lies the Most? 2016 U.S. Presidential Election


UPDATE – October 23, 2016

Readers:

I have updated my data with the latest from PolitiFact.com for the interactive version of this chart published on Tableau Public.

Thanks,

Michael

UPDATE: October 2nd, 2016

Still no data on Jill Stein. What is up with that, PolitiFact? Just a few more facts added to the counts of President Obama, Secretary Clinton and Mr. Trump, but nothing really earth-shattering. I think there was one new fact for Gary Johnson.

Any candidate that I updated today has a refresh date of 10/02/2016 in their tooltip.

I was sent an e-mail question from Ben in Alaska (I am not sure he wants his last name mentioned, so I will just call him Ben). He asked me what kind of weighting factor I used in my calculations. Actually, I don't use any, since I felt it might skew the numbers unfairly. So, for example, if a candidate has 10 facts categorized as Pants on Fire and has 100 facts total, then 10% of their facts were Pants on Fire. I did not want to inflate the perception of any candidate who may make more false-leaning statements. I wanted the numbers to speak for themselves.

However, with that said, it begs the question of what kinds of facts are used in the evaluation. For example, if I ask a candidate five times whether he/she has a dog, and he/she says "Yes" all five times, this will raise their truthful score. But are these the kinds of "truths" we want to include in the calculations? I mean, these are presidential candidates, so I think the facts should raise questions related to their past activities in government (such as how they voted, or what kind of legislation they supported) or running a large business (were your businesses profitable, how do you treat your employees or contractors, do you discriminate by gender or color?). Or, if they are asked if they know what Aleppo is and they reply "What's Aleppo?" as Gary Johnson did, I find that to be an honest answer, although it did not project strong knowledge of the war in Syria or of Syria's geography. I am not picking on Mr. Johnson here; it might be the case that he is steeped in knowledge about Syria and the war itself, why it occurred, and how he thinks we can resolve it. Again, a lot of questions can be raised from his quizzical answer to that question. I give him kudos for his honesty. I think some of the other candidates might have tried to BS their way out of answering the question if they did not know the answer.

Anyway, some food for thought. I admit I need to study PolitiFact's methodology more deeply to find the answers to my questions about how they determine what a fact truly is.

If you find any errors or omissions in any of the charts provided, they were unintentional and in no way meant to make a certain candidate look better or worse. Please e-mail me at michael@dataarchaeology.net with your suggested corrections and I will make those that are fair and reasonable.

Thanks to all of you who provided comments on my blog. I am also interested in your thoughts about what constitutes a fact. I hope you drop me a line.

Thanks,

Michael

Copyright (c) 2016, Michael S. Sandberg, Data Archaeology, Inc. All Rights Reserved.


Filed under: Interactive Data Visualization, Interactive Visualization for the Web, Political DataViz, Politics, PolitiFact, Tableau, tableau public, Uncategorized

Calling Bullshit on the 2017 Gartner Magic Quadrant for BI & Analytics Platforms


[Image: 2017 Gartner Magic Quadrant for BI & Analytics Platforms]

Readers:

Around this time every year, friends and colleagues start e-mailing me images (see above) of the 2017 Gartner BI and Analytics Magic Quadrant (referred to as the MQ). In the past, when I worked extensively with MicroStrategy, my colleagues would complain about how IBM (née Cognos) was ranked ahead of it. For the last two years, my friends in the Tableau community have been scratching their heads over how Microsoft Power BI ranked ahead of Tableau in completeness of vision. Grumbles throughout the BI & analytics industry range from "The Gartner folks don't know what they are talking about" to "well, you know, Microsoft owns a piece of Gartner."

*Sigh*

Personally, I have had excellent conversations in the past with Kurt Schlegel and Cindi Howson from Gartner. I found both of them to be fairly reasonable and honest people.

Today, Cindi Howson (photo, right) posted on the Gartner Blog, Biggest Mistakes To Avoid When Reading the Magic Quadrant. In this post, Cindi acknowledges both the anxiety and the anticipation BI & analytics industry folks had waiting for this year's MQ to be published.

However, Cindi notes that there are some key mistakes people make when viewing the MQ image by itself.

Assuming “ability to execute” is about ability and future ability.

Per Cindi, placement along the Y-axis (see example to the left) is labeled Ability to Execute (A2E). She says you should mentally visualize this as a combination of product capabilities, financials, and operational excellence. Strictly looking at how a vendor did in the previous year is not indicative of how much "ability" they will have in the future. A mediocre product that rates well in other areas may end up with a high placement on the A2E axis; likewise, a great product with terrible customer support can still end up placed high.

Cindi states, “Being average across the board rarely lands a vendor in the top half of the Quadrant, whether Leader or Challenger.”

Assuming “completeness of vision” is only the product roadmap.

[Image: Completeness of Vision axis detail]

Cindi notes, “The placement along the X-axis (see example above) does include the vendor’s vision, or roadmap, but it also includes a number of other factors such as market understanding as well as strategy on marketing and vertical solutions.”

Even if a vendor has a great roadmap, if they are not excelling in other strategic factors, they will often end up in the Niche Players or Challengers quadrant.

The Gartner BI and Analytics MQ team defines “market understanding” as a combination of ease of use and complexity of data and analysis, because that is what is driving new buying requirements.

Looking only at the picture.

[Image: this year's MQ showing only the dots]

"A picture is worth a thousand words" is an English idiom. It refers to the notion that a complex idea can be conveyed with just a single still image, or that an image of a subject conveys its meaning or essence more effectively than a description does.

Notice the image on the right. It just shows the dots. Pretty meaningless without more information. Right?

Cindi talks about needing to read the fine print while viewing the MQ. She cites Tables 1 and 2 of this MQ (I have provided screenshots of them below).

Cindi notes, “When something doesn’t make sense to me or when the model first generates the graphic, I have to remind myself of the six to eight drivers that go into each axis.”

[Tables 1 and 2: the Ability to Execute and Completeness of Vision evaluation criteria]

Using only the MQ.

Cindi states it best when she says, “If you rely only on the MQ to set your BI and analytics strategy, you are making a mistake. If you only look at the Leaders, you also are making a mistake; the best vendor for your particular requirements— short term and long-term— may be in another quadrant or not in the MQ at all. The MQ is just one resource. We have the companion Critical Capabilities which focuses on the product only (new version due out soon), the market guides, the cool vendors, and so many toolkits.  Use the full body of research when buying products and setting strategy. Better yet, set up an inquiry call so we can guide you through the process. It’s what we are here for, and there are a lot of us!”

I hope you found Cindi's comments helpful in how you view the MQ. I don't always agree with Gartner's findings, but they put a lot of effort into creating these every year, and it is at least a solid baseline for us to do our own objective evaluation of the tools based on our business requirements. And remember: a fool with a tool is still a fool.

Best Regards,

Michael

Sources:

[1] Rita L. Sallam, Cindi Howson, Carlie J. Idoine, Thomas W. Oestreich, James Laurence Richardson, Joao Tapadinhas, Magic Quadrant for Business Intelligence and Analytics Platforms, Gartner, G00301340, February 16, 2017, https://www.gartner.com/document/3611117?ref=unauthreader&srcId=1-4554397745.
[2] Cindi Howson, Biggest Mistakes To Avoid When Reading the Magic Quadrant, Gartner, February 23, 2017, http://blogs.gartner.com/cindi-howson/2017/02/23/bia2017mq/.
[3] –, Who Owns Gartner?, InformationWeek, October 31, 2003, http://www.informationweek.com/who-owns-gartner/d/d-id/1021577?page_number=1.
[4] –, A picture is worth a thousand words, Wikipedia, https://en.wikipedia.org/wiki/A_picture_is_worth_a_thousand_words.

Filed under: Cindi Howson, Gartner, IBM Cognos, Kurt Schlegel, Magic Quadrant, Microsoft Power BI, MicroStrategy, Tableau, Uncategorized

Tapestry Conference 2017: 10 Data Storytelling Videos


Readers:

The 5th annual Tapestry Conference was held on March 1st, 2017. Over 100 invitees from journalism, academia, government and both the non-profit and for-profit private sectors gathered at the Casa Monica resort in St. Augustine, Florida to discuss the emerging discipline of data storytelling.

Click on the Tapestry logo to the right to be redirected to the ten presentations from the one-day event.

Enjoy!

Michael

 


Filed under: Data Visualization, Infographics, Storytelling, Tableau, tableau public, Tapestry Conference, Uncategorized

Review – Part 1: MOOC, “Data Exploration and Storytelling: Finding Stories in Data with Exploratory Analysis and Visualization”


Readers:

Recently I participated in the MOOC Data Exploration and Storytelling: Finding Stories in Data with Exploratory Analysis and Visualization, taught by Alberto Cairo and Heather Krause and offered by the Knight Center for Journalism in the Americas at The University of Texas – Austin.


I have previously attended two other MOOC courses taught by Professor Cairo and gained a lot of knowledge from those courses, as well as inspiration and excitement in my work environment.

I really, really loved this course and wanted to provide some feedback and information for those of you considering taking this course (or a future course) from Alberto and Heather.

I like to ruminate about what I will write rather than rushing to get it all in one long blog post, so I will be doing this as several blog posts. Please have patience with me, as I want to provide as much information and as many thoughts as possible about why this was a great course.

I need to, for full disclosure, state that I am not a journalist. I am a Business Intelligence Data Architect by title, but really consider myself a seasoned programmer (with lots of experience in many programming languages, scripting languages and BI Toolsets). Currently, my primary BI toolset is Tableau Desktop v10.2.

I hope you find this review helpful, and I highly encourage you to take courses not only from them but also from other offerings in the MOOC space.

Best Regards,

Michael

Goals and Objectives

Goals

The primary goal of the course was to introduce the participants to ways to use data as a source to tell stories. Throughout the course, Mr. Cairo and Ms. Krause demonstrated, via the videos, the tools and techniques they commonly use to interrogate data for answers – gathering, cleaning, organizing, analyzing, visualizing and publishing data to find and tell stories.

I will touch on some examples of these tools and techniques throughout the series of blog posts for this review.

Objectives

The objective of the course was for the participants to come away with knowledge about the following topics.

• How to find data
• How to understand the data you want to work with
• How to build stories with data using several variables or pieces of data at the same time
• How to implement best practices around ethics and data

Course Format

This course is referred to in the syllabus as “an asynchronous course.” Again, per the syllabus, “that means there are no live events scheduled at specific times. You can log in to the course and complete activities throughout the week at your own pace, at the times and on the days that are most convenient for you.” [1]

Courses like this are referred to as a MOOC. A Massive Open Online Course (MOOC) is an online course aimed at unlimited participation and open access via the web. In addition to traditional course materials such as filmed lectures, readings, and problem sets, many MOOCs provide interactive user forums to support community interactions among students, professors, and teaching assistants (TAs). MOOCs are a recent and widely researched development in distance education which were first introduced in 2006 and emerged as a popular mode of learning in 2012. [3]

The poster below, titled "MOOC, every letter is negotiable", explores the meaning of the words "Massive Open Online Course". [4]

[Poster: "MOOC, every letter is negotiable"]

Instructors [2]

Alberto Cairo is the Knight Chair in Visual Journalism at the University of Miami. He's also director of the Visualization program at UM's Center for Computational Science. For more than a decade, Cairo was an infographics director at publications in Spain and Brazil, such as El Mundo and Globo magazines. Today, besides being a professor, he works as a consultant and designer for news organizations and for companies like Google and Microsoft. He's the author of the books The Functional Art: An Introduction to Information Graphics and Visualization (2012) and The Truthful Art: Data, Charts, and Maps for Communication (2016).

Heather Krause is a data scientist with years of experience working on complex research problems in the social, non-profit, and data journalism sectors. She is passionate about helping people understand and use the best practices and tools to transform data into rich stories. Heather has worked on many complex stories that mix math, science, and creativity into comprehensive narratives and data journalism pieces. As the founder and president of Datassist, she has worked with FiveThirtyEight, Orb Media, the Bill and Melinda Gates Foundation, the Syrian Refugee Resettlement Secretariat, and many more international non-profit organizations.

Course Content

The course consisted of six modules. Each module (videos, reading, exercises, discussion forum, quiz) was expected to be completed in a week. The six modules were:

Module 1: Finding and Understanding Data

Module 2: Character Development for your Data Story

Module 3: Basic Plot Elements of Your Story

Module 4: Advancing the Plot of Your Story

Module 5: The Plot Thickens in your Data Story

Module 6: Putting the Data Story Together

Next Blog Post: Review of each of the modules

Sources:
[1] Alberto Cairo and Heather Krause, Course Syllabus: “Data Exploration and Storytelling: Finding Stories in Data with Exploratory Analysis and Visualization“, Knight Center for Journalism in the Americas, The University of Texas – Austin, January 16–February 26, 2017.

[2] Alberto Cairo and Heather Krause, Biography Photos and Introduction, Knight Center for Journalism in the Americas, The University of Texas – Austin, DES17 – Introduction, http://journalismcourses.org/course/view.php?id=47&section=1.

[3] Wikipedia.com, Massive open online course.

[4] Mathieu Plourde, “MOOC, every letter is negotiable”, April 4, 2013, licensed CC-BY on Flickr, http://www.flickr.com/photos/mathplourde/8620174342/sizes/l/in/photostream/.


Filed under: Alberto Cairo, Analytics, Data Visualization, Heather Krause, Infographics, MOOC, Storytelling, Tableau, tableau public, Uncategorized