Send SQL data to datasets (macOS and Linux)

SQL-Dataset is a command-line app that quickly and easily integrates your MySQL, Microsoft SQL Server, Postgres, and SQLite databases with Geckoboard. Rather than working with client libraries and writing code to connect to and query your database, all you need to do with SQL-Dataset is fill out a simple config file and run the program from the command line.

Running SQL-Dataset on a Windows machine or server?

If you plan to run SQL-Dataset on a Windows machine or server, follow the steps in this Windows set-up guide.

Downloading the SQL-Dataset app

To get started with SQL-Dataset, download the latest version of the app for your specific OS and system type.

After that, follow the steps below:

Make it executable

To make the app executable on macOS and Linux, you'll need to open a terminal and run chmod u+x [path/to/file], where [path/to/file] is the path to your downloaded app.
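For example, if the binary landed in your Downloads folder, the command might look like this (the filename below is illustrative; yours will match the OS and system type you downloaded):

```shell
chmod u+x ~/Downloads/sql-dataset-linux-amd64
```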

Create a config file

SQL-Dataset works by reading all of the information it needs from a YAML file.


To help get you started, we've prepared an example.yml config.

Example config file

Here's what an example config file looks like:

geckoboard_api_key: your_api_key
database:
  driver: mysql
  host: xxxx
  port: xxxx
  username: xxxx
  password: xxxx
  name: xxxx
  tls_config:
    ca_file: xxxx
    key_file: xxxx
    cert_file: xxxx
    ssl_mode: xxxx
refresh_time_sec: 60
datasets:
  - name:
    update_type: replace
    sql: >
      SELECT 1, 0.34, source
      FROM table
    fields:
      - type: number
        name: Signups
      - type: percentage
        name: Conversion rate
      - type: string
        name: Source

If you wish, you can provide any of geckoboard_api_key, host, port, username, password and (database) name as environment variables with the syntax "{{ YOUR_CUSTOM_ENV }}". Make sure to keep the quotes in there! Here's an example:

geckoboard_api_key: "{{ GB_API_KEY }}"
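Assuming you chose the variable name GB_API_KEY as in the example above, you would set it in your shell before running the app:

```shell
# Set the variable referenced as "{{ GB_API_KEY }}" in the config file
export GB_API_KEY=your_api_key
```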


geckoboard_api_key

This is where your Geckoboard API key goes. You can find yours by following these steps.


database

Enter the type of database you're connecting to in the driver field. SQL-Dataset supports:

  • mssql
  • mysql
  • postgres
  • sqlite

Get in touch

If you'd like to see support for another type of database, let our Customer Success team know or, if you're technically inclined, make the change and submit a pull request!

Only three parameters are required:

  • driver
  • username
  • name

The other attributes, such as host and port, will default to their driver-specific values unless overridden.
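As a sketch (the driver, username, and database name here are placeholders), a minimal database section could be as short as:

```yaml
# host and port are omitted, so they fall back to the driver's defaults
database:
  driver: postgres
  username: readonly_user
  name: analytics
```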


If your database requires a CA cert or an x509 key/cert pair, you can supply this in tls_config under the database key.

 tls_config:
   ca_file: /path/to/file.pem
   key_file: /path/to/file.key
   cert_file: /path/to/cert.crt
   ssl_mode: (optional)

The possible values for ssl_mode depend on the database you're using:

  • mssql: disable, false, true – try the disable option if you experience connection issues.
  • mysql: true, skip-verify.
  • postgres: disable, require, verify-ca, verify-full.
  • sqlite: not applicable.

Important note

We strongly recommend that the user account you use with SQL-Dataset has the lowest level of permission necessary.

For example, one which is only permitted to perform SELECT statements on the tables you're going to be using. Like any SQL program, SQL-Dataset will run any query you give it, which includes destructive operations such as overwriting existing data, removing records, and dropping tables.
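For example, on MySQL a read-only account could be created along these lines (the user, password, and database names are placeholders):

```sql
-- Create an account that can only read from one database
CREATE USER 'sql_dataset'@'localhost' IDENTIFIED BY 'choose_a_strong_password';
GRANT SELECT ON your_database.* TO 'sql_dataset'@'localhost';
```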

We accept no responsibility for any adverse changes to your database due to accidentally running such a query.


refresh_time_sec

Once started, SQL-Dataset can run your queries periodically and push the results to Geckoboard. Use this field to specify the time, in seconds, between refreshes.

If you do not wish for SQL-Dataset to run on a schedule, omit this option from your config.


datasets

Here's where the magic happens - specify the SQL queries you want to run, and the Datasets you want to push their results into.

  • name: The name of your Dataset.
  • sql: Your SQL query.
  • fields: The schema of the Dataset into which the results of your SQL query will be parsed.
  • update_type: Either replace, which overwrites the contents of the Dataset with new data on each update, or append, which merges the latest update with your existing data.
  • unique_by: An optional array of one or more field names whose values will be unique across all your records. When using the append update method, the fields in unique_by will be used to determine whether new data should update any existing records.
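For instance, a dataset entry using append with unique_by might look like this (the dataset name, table, and field names are illustrative):

```yaml
datasets:
  - name: sales.by.day
    update_type: append
    # Rows with a matching date update the existing record instead of duplicating it
    unique_by:
      - date
    sql: >
      SELECT date, orders FROM sales
    fields:
      - type: date
        name: Date
      - type: number
        name: Orders
```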


fields

A Dataset can hold up to 10 fields. The fields you declare should map directly to the columns that result from your SELECT query, in the same order.

SQL-Dataset supports all of the field types supported by the Datasets API:

  • date
  • datetime
  • number
  • percentage
  • string
  • money

Here's an example:

sql: SELECT date, orders, refunds FROM sales
fields:
  - name: Date
    type: date
  - name: Orders
    type: number
  - name: Refunds
    type: number

The money field type requires a currency_code to be provided:

 - name: MRR
   type: money
   currency_code: USD

Numeric field types can support null values. For a field to support this, pass the optional key:

 - name: A field which might be NULL
   type: number
   optional: true

The Datasets API requires both a name and a key for each field, but SQL-Dataset will infer a key for you. Sometimes, however, the inferred key might not be permitted by the API. If you encounter such a case, you can supply a specific key value for that field.

 - name: Your awesome field
   key: some_unique_key
   type: number

Run the script

Make sure that the SQL-Dataset app and your config file are in the same folder, then navigate to that folder from the command line and run the command below (don't forget the ./ at the beginning):

./sql-dataset -config config.yml

Where config.yml is the name of your config file. Once you see confirmation that everything ran successfully, head over to Geckoboard and start using your new Dataset to build widgets.

What happens if my SQL query returns no data?

The SQL-Dataset app has built-in behavior for when the SQL query in your config file returns no rows of data:

For Append calls (i.e. POST): No new data will be pushed. The logic here being there's nothing to append to the dataset. The corresponding widget(s) powered by that dataset will continue to show the previous data.

For Replace calls (i.e. PUT): This will wipe all the existing data in the dataset, leaving behind an empty dataset. The dataset itself and its schema will be preserved, though. The corresponding widget(s) will display a "No data to visualize" message.
