Import the Geckoboard package and create an instance of the client using your API key:
import geckoboard

client = geckoboard.client(API_KEY)
You can make calls to the Datasets API with whichever method you usually use to make HTTP requests, but Geckoboard offers client libraries that make interacting with the API even simpler.
Switch the programming language of the examples with the tabs in the top right. By default, the Datasets API Docs demonstrate using cURL to interact with the API over HTTP.
If you're on a Unix-based OS (macOS, Linux), you likely have cURL installed on your machine already (run curl -V in your terminal to confirm). Windows users can access the Command Prompt by searching for Command within Cortana.
We’ll be using the Geckoboard Node.js library to make a simple Node.js app. Node.js version 4+ is required.
Authenticate and test your account when using the Datasets API by including your personal API key in the request.
The API key is used as the username with an empty password, so include the trailing : after the key. If you left it out and are prompted for a password, just hit Enter in your terminal.
Verify that your API key is valid and that you can reach the Geckoboard API with the ping method.
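With the Node.js library used later on this page, a minimal connectivity check might look like the sketch below. It assumes the geckoboard npm package exposes a Geckoboard class (as in the full example further down) and a ping method that rejects when the key is invalid or the API is unreachable.

import { Geckoboard } from 'geckoboard'; // assumed import shape for the geckoboard npm package

const gb = new Geckoboard('your_api_key'); // the API key is the only credential

try {
  await gb.ping(); // resolves if the key is valid and the API is reachable
  console.log('Ping succeeded: API key is valid');
} catch (err) {
  console.error('Ping failed:', err);
}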
If you're seeing the error undefined method `client' for Geckoboard:Class (NoMethodError), you may have multiple versions of the geckoboard gem installed. To clear them, run gem uninstall geckoboard, use gem list to confirm they're gone, then rerun the ping test command.
When you’re adding a dataset widget to your dashboard, we’ll look at your schema and present the visualization options that make sense for the types of data you’re sending us. For example, to plot a line chart the dataset must contain a date or datetime field.
Visualizations are powered by individual datasets, which means you can't combine data from two or more datasets to build a visualization.
Geckoboard can handle data aggregation and grouping, so there’s no need to pre-aggregate your data. And when an update is received via the API, all the widgets powered by that dataset are then updated automatically.
The Datasets API currently supports the following types:
datetime fields must be formatted as ISO 8601 strings, the International Standard for the representation of dates and times.
We recommend you use the YYYY-MM-DDThh:mm:ssTZD variation, which will produce values that look like 2018-01-01T12:00:30Z (1st January, 2018, 12:00:30 pm, UTC).
A datetime field can be NULL if set as optional.
The elements of this format are:
YYYY: four-digit year
MM: two-digit month (use a leading 0 for months 1-9)
DD: two-digit day of the month (01 through 31)
hh: two digits of hour (00 through 23; 24-hour clock only)
mm: two digits of minute (00 through 59)
ss: two digits of second (00 through 59)
TZD: time zone designator; use Z for UTC, or +hh:mm / -hh:mm to indicate a local time zone that is hh hours and mm minutes ahead of or behind UTC
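As an illustration (this is ordinary JavaScript, not part of the API), the built-in Date object can produce a compliant UTC value, since toISOString() emits timestamps in this shape:

// Build an ISO 8601 datetime string in UTC (the Z time zone designator)
const when = new Date(Date.UTC(2018, 0, 1, 12, 0, 30)); // months are zero-based
console.log(when.toISOString()); // "2018-01-01T12:00:30.000Z"

// Strip the milliseconds if you want the exact YYYY-MM-DDThh:mm:ssZ shape
console.log(when.toISOString().replace(/\.\d{3}Z$/, 'Z')); // "2018-01-01T12:00:30Z"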
money fields represent a certain amount of money in a single currency. You can specify the currency when defining the field using the currency_code option. This option accepts three character currency codes defined by the ISO 4217 standard. Currency codes should always be in uppercase.
Records should specify the amount of money in the currency’s smallest denomination, as an integer. For example, the USD’s smallest denomination is the cent, so a USD field would specify $10.00 as 1000.
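As a quick sketch of what this looks like in JavaScript (the field and variable names are illustrative, not prescribed by the API):

// A money field in USD; currency_code must be an uppercase ISO 4217 code
const fields = {
  gross: { type: 'money', name: 'Gross value of sales', currency_code: 'USD' }
};

// Convert a decimal dollar amount into cents, the smallest USD denomination
const dollars = 10.0;
const cents = Math.round(dollars * 100); // 1000, i.e. $10.00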
Regular decimal values (e.g. 10.24) can be used in number fields.
For some kinds of decimal-like values (such as software versions, e.g. 5.1234), as well as values containing other characters like dashes - and brackets () (used in telephone numbers such as (555) 555-1234), you may need to use the string type instead.
Append adds new records to your dataset, or modifies records that already exist in it. It uses the POST method.
If you haven’t included a unique_by array with your dataset definition, then all new records will be appended to the existing contents of your dataset.
If you have included a unique_by array of fields, then any conflict between your new and existing records will be resolved by merging your updates into the contents of your dataset. This can be used to modify existing records in case their values have changed since your last update or if you want to fix an incorrect record.
Should the number of records in your dataset exceed the 5,000-record limit following an append, the oldest records will be discarded.
Request attributes:
data: an array of objects, each made up of key/value pairs representing a record in your dataset.
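With the Node.js library, an append might look like the sketch below. It assumes a dataset object like the one defined in the full example further down this page (with unique_by set to date and name) and that the library exposes an append method for the POST call; the records themselves are illustrative.

// POSTs the records; existing records that match on the unique_by fields are updated in place
await dataset.append([
  { date: '2016-01-04', quantity: 312, gross: 936000, name: 'four' },  // brand new record
  { date: '2016-01-03', quantity: 170, gross: 510000, name: 'three' }  // updates the existing 2016-01-03 record
]);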
Replace deletes all the existing data within the dataset and then writes the new data. In effect, your dataset will contain only the records you just pushed (you can think of it as an overwrite). It uses the PUT method.
If you're seeing the error uninitialized constant DateTime (NameError), add require 'date' at the beginning of your file, or before the command if you are using Interactive Ruby (IRB).
Request attributes:
data: an array of objects, each made up of key/value pairs representing a record in your dataset.
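A minimal replace with the same Node.js dataset object might look like this (a sketch; the record shown is illustrative):

// PUTs the records, discarding everything previously stored in the dataset
await dataset.replace([
  { date: '2016-02-01', quantity: 500, gross: 1500000, name: 'one' }
]);
// The dataset now contains only this single record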
Your particular setup and use case will largely determine which method you use, but append (in combination with unique_by and delete_by) is nearly always preferable: it performs better and lets you send more data to Geckoboard over time.
Consider append when:
You’re always getting data for right now from your data source and want to build up a historical record in your dataset.
You’re planning to push data more than once every few minutes – even if you’re updating a dataset that contains only a single record.
You want to push more than 500 records to your dataset; for this you'll need to send multiple appends.
Consider replace when:
You’re not interested in maintaining historical data or displaying a comparison.
You’re able to pull the complete data for the entire time period you’re interested in.
Wiping a dataset clears all its existing data by passing an empty array via the PUT method (i.e. a replace), leaving behind an empty dataset. The dataset itself and its schema are preserved.
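With the Node.js library, that amounts to a replace with an empty array (a sketch, reusing the dataset object from the example below):

// Clears every record but keeps the dataset and its schema intact
await dataset.replace([]);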
import { Geckoboard } from 'geckoboard'; // assumed import shape for the geckoboard npm package

const API_KEY = 'your_api_key';
const gb = new Geckoboard(API_KEY);

const dataset = gb.defineDataset({
  id: 'sales.by_day',
  fields: {
    quantity: { type: 'number', name: 'Number of sales' },
    gross: { type: 'money', name: 'Gross value of sales', currency_code: 'USD' },
    date: { type: 'date', name: 'Date' },
    name: { type: 'string', name: 'Name' }
  },
  unique_by: ['date', 'name']
});

try {
  await dataset.create();
  await dataset.replace([
    { date: '2016-01-01', quantity: 819, gross: 2457000, name: 'one' },
    { date: '2016-01-02', quantity: 409, gross: 1227000, name: 'two' },
    { date: '2016-01-02', quantity: 415, gross: 1229523, name: 'two' },
    { date: '2016-01-03', quantity: 164, gross: 492000, name: 'three' }
  ]);
  console.log('Dataset created and data added');
} catch (err) {
  console.error(err);
}
When you check the dataset, it contains only 3 records despite 4 being sent. The two records that share the same date and name count as duplicates under the unique_by constraint, and only the later one is kept.
You can specify multiple field names in unique_by, as long as they are string, date, or datetime fields; together they act as a unique identifier for each record.
Limits and quotas
API rate limit
"error": {
"message": "You have exceeded the API rate limit of 60 requests per minute. Try sending data less frequently"
}
}
There is basic rate limiting on the API. This restricts you to 60 requests per minute for your API key.
If you exceed your limit, the API will return a 429 TOO MANY REQUESTS status and error message.
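If you do hit the limit, backing off and retrying is usually enough. The sketch below talks to the HTTP API directly using Node's global fetch (Node 18+); the endpoint path and request body follow the dataset conventions described on this page, but the retry strategy and helper name are illustrative, not part of the API.

// Replace a dataset's data, waiting and retrying if the API answers 429 TOO MANY REQUESTS
async function putWithRetry(datasetId, records, apiKey, attempts = 3) {
  const auth = 'Basic ' + Buffer.from(apiKey + ':').toString('base64'); // API key as username, empty password

  for (let i = 0; i < attempts; i++) {
    const res = await fetch(`https://api.geckoboard.com/datasets/${datasetId}/data`, {
      method: 'PUT',
      headers: { Authorization: auth, 'Content-Type': 'application/json' },
      body: JSON.stringify({ data: records })
    });

    if (res.status !== 429) return res; // success, or an error unrelated to rate limiting

    await new Promise((resolve) => setTimeout(resolve, (i + 1) * 5000)); // wait longer on each retry
  }
  throw new Error('Still rate limited after ' + attempts + ' attempts');
}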
Records per dataset
Each dataset can contain up to 5000 records.
When a dataset exceeds the record count limit the oldest records (by insertion time) will be removed. This behaviour can be overridden by using the delete_by option when appending new records.
When set to the name of a date or datetime field, the delete_by option will be used to order your records (from newest to oldest) before records are truncated from the dataset.
If you specify a date field for delete_by then the datasets API will try to avoid leaving your dataset with a partially complete day’s worth of data. When it deletes a record it will also delete any records that have the same date value for that field.
If the delete_by field is a datetime field then only records with that exact same timestamp (i.e. same year, month, day, hour, minute, second, and millisecond) will be deleted.
Columns per dataset
Each dataset can contain up to 80 columns.
Records per request
Each PUT or POST request accepts up to 500 records, which includes both new records and updates to existing records.
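To push a larger backlog, split it into batches of at most 500 records and append each batch in turn. A sketch using the Node.js dataset object from the example above (the helper itself is illustrative):

// Append records in batches of up to 500 to stay within the per-request limit
async function appendInBatches(dataset, records, batchSize = 500) {
  for (let start = 0; start < records.length; start += batchSize) {
    await dataset.append(records.slice(start, start + batchSize));
  }
}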
Datasets per account
Each Geckoboard account is limited to 200 datasets. Once that limit is reached, no more datasets can be added until some are removed; you can delete datasets via the API.
Update frequency
Widgets powered by datasets update in real-time when new data is received.
There's no limit on how frequently you can send updates to a dataset, as long as you stay within the rate limit, but the visualizations on your dashboards will only show changes at most every 10 seconds. We'd recommend not updating a dataset more frequently than that.
Visualization requirements
Your schema determines the visualizations that can be built with your dataset on a Geckoboard dashboard.
Make sure to include these types of data in your schema if you're building a particular visualization.
Number visualization
The number visualization is focused on the display of a metric that can be represented by a single number, along with optional associated secondary metrics, such as a change or trend indication.
Visualization type and required field types:
Number: duration, money, number, or percentage
Number with sparkline comparison: money, number, or percentage
Number with specific time-based sparkline comparison: money, number, or percentage, plus date or datetime
Number with percentage comparison: money, number, or percentage
Number with number comparison: money, number, or percentage
Number with goal comparison: money, number, or percentage
Gauge visualization
Gauges are a great way of representing a single data point that fluctuates over time, like a speedometer in a car. The gauge is most useful to quickly see a metric in comparison to defined minimum and maximum values.
Visualization type and required field types:
Gauge: duration, money, number, or percentage
Gauge with needle on specific time-based value: duration, money, number, or percentage, plus date or datetime
Line Chart
Line charts are best used to track changes over time, using equal intervals of time between each data point.
There are two ways to create a multi-series line chart using the Datasets API. When creating one, you’ll need to pick between:
Having line chart series with identical data types (i.e. all of them money)
Having a string value in the dataset to “split by”, which automatically generates a series for each distinct string value (as sketched below)
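For the “split by” approach, the schema just needs a string field alongside the metric and the date. A sketch, following the option names used in the full example above (the dataset ID and field names are illustrative):

// One series is generated automatically for each distinct value of 'channel'
const lineChartDataset = gb.defineDataset({
  id: 'revenue.by_channel',
  fields: {
    revenue: { type: 'money', name: 'Revenue', currency_code: 'USD' },
    date: { type: 'date', name: 'Date' },
    channel: { type: 'string', name: 'Channel' } // the "split by" field
  },
  unique_by: ['date', 'channel']
});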
Visualization type and required field types:
Line chart multi-series: duration, money, number, or percentage
Line chart X-axis: duration, date, or datetime
Column Chart
Column chart data is represented by rectangular bars with lengths proportional to the values that they represent. The column chart's discrete data is categorical data and answers the question of "how many?" in each category.
Visualization type and required field types:
Column chart metric: duration, money, number, or percentage
Column chart X-axis: duration, date, datetime, or string
Multi-series column chart metric: duration, money, number, or percentage
Bar Chart
Bar charts display data using horizontal rectangular bars, where the length of the bar is proportional to the data value. The bar chart's discrete data is categorical data and answers the question of "how many?" in each category.
Visualization type and required field types:
Bar chart metric: duration, money, number, or percentage
Bar chart X-axis: duration, date, datetime, or string
Leaderboard visualization
Leaderboards are a visualization of achievement. Their goal is to make comparisons between people's (or items') ranks.
Visualization type and required field types:
Leaderboard label: string
Leaderboard value: money, number, or percentage
Table visualization
Tables are used to display data from up to 10 columns from a dataset. There are two types of tables:
Raw data: A row for each record showing the raw data.
Summary: Aggregated data, grouped by a string or date.
Visualization type and required field types:
Table raw data: any data type, as long as there are at least two of them
Table summary: duration, money, number, or percentage (for the columns), plus date, datetime, or string (for the grouping)