Personalize your outreach

This tutorial uses the Faraday API to segment your customers into "personas." You upload customer identifiers and we provide all of the responsibly sourced third-party data necessary to cluster them.

😅This tutorial seems long, but it's only 6 POST requests to transform a raw CSV of orders into finished predictions. Go forth and conquer!

📘You can't accidentally incur charges

The steps in this guide, including generating your bespoke personas, are completely free. You won't be charged until you want to start retrieving the persona assignments at scale.

Account & credentials

Create a free account if you haven't already. You will immediately get an API key that works for test data.

Prepare and send your data

You are ready to send some data over to Faraday. This is done by placing your data into a CSV file and sending it through the API.

📘Sample data

Don't have access to customer data just yet? No problem — grab our sample data from the Testing page.

Make a CSV

Since this tutorial is based on your customers, your data source might be an export of your orders, or a list of users from your CRM or other marketing tools. You will need to format your data as a CSV. See Sending data to Faraday for examples and validation details.

Here's an example list of columns in a valid CSV:

  • customer ID
  • first name
  • last name
  • address
  • city
  • state

But you could also (or alternatively) include:

  • email
  • phone

🚧️Include a header row

Your CSV file should have a "header" row, but you can use any headers you like. We suggest using recognizable headers that make sense to you.
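For example, a header row plus a couple of data rows might look like this (column names and values are illustrative; the `date` and `total` columns come into play when you map events later):

```
customer_id,first_name,last_name,address,city,state,email,date,total
1001,Jane,Doe,123 Main St,Burlington,VT,jane@example.com,2023-04-01,59.99
1002,John,Smith,456 Oak Ave,Montpelier,VT,john@example.com,2023-04-03,24.50
```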

Uploading your CSV

After preparing your CSV file, you are going to upload it using the API's upload endpoint.

Note that you will always upload your files to a subfolder under uploads. The example below uploads a local file named acme_orders.csv to orders/file1.csv on Faraday. You can pick any folder name and filename you like; we will use the folder name in the next step. You can even upload multiple files with the same column structure into the same folder if that's easier, and they'll all get merged together. This is especially useful if you want to update your model over time, for example as new orders come in.

curl --request POST \
     --url https://api.faraday.ai/v1/uploads/orders/file1.csv \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer YOUR_API_KEY' \
     --header 'Content-Type: application/octet-stream' \
     --data-binary "@acme_orders.csv"
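If you'd rather script the upload, the same request is easy to build in Python. A minimal sketch using only the standard library; the file and folder names are the hypothetical ones from this tutorial:

```python
# Sketch: upload one or more CSVs with the same columns into a single
# uploads/ subfolder, so Faraday merges them into one data source.
# Filenames and folder names here are illustrative.
import urllib.request

API_BASE = "https://api.faraday.ai/v1"

def build_upload_request(local_path: str, folder: str, name: str, api_key: str):
    """Build the upload request for uploads/<folder>/<name>."""
    with open(local_path, "rb") as f:
        body = f.read()
    return urllib.request.Request(
        url=f"{API_BASE}/uploads/{folder}/{name}",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/octet-stream",
            "Accept": "application/json",
        },
    )

# To send it (and to upload additional files into the same folder):
# for path, name in [("acme_orders_2022.csv", "file1.csv"),
#                    ("acme_orders_2023.csv", "file2.csv")]:
#     urllib.request.urlopen(build_upload_request(path, "orders", name, API_KEY))
```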

Mapping your data

Once your file has finished uploading, Faraday needs to know how to understand it. You'll use Datasets to define this mapping.

If you're using the sample file, check out Testing for an example API call that includes the right field configuration.

curl --request POST \
     --url https://api.faraday.ai/v1/datasets \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer YOUR_API_KEY" \
     --header "Content-Type: application/json" \
     --data '
{
    "name": "orders_data",
    "identity_sets": {
        "customer": {
            "house_number_and_street": [
                "address"
            ],
            "person_first_name": "first_name",
            "person_last_name": "last_name",
            "city": "city",
            "state": "state"
        }
    },
    "output_to_streams": {
        "orders": {
            "data_map": {
                "datetime": {
                    "column_name": "date",
                    "format": "date_iso8601"
                },
                "value": {
                    "column_name": "total",
                    "format": "currency_dollars"
                }
            }
        }
    },
    "options": {
        "type": "hosted_csv",
        "upload_directory": "orders"
    }
}
'

Let's break down the above example.

  • upload_directory — Here you are telling Faraday which files we're talking about by specifying the subfolder you uploaded your data to, e.g. orders in our above example. If there are multiple files in this folder (and they all have the same structure), they will be merged together.
  • identity_sets — Here's where you specify how Faraday should recognize the people in each of your rows. Your data may have multiple identities per row, especially in lists of orders where you may have separate billing and shipping info. Our example above creates an arbitrarily named identity, customer, built from name and address (mapping the first_name, last_name, address, city, and state columns from our CSV to the fields Faraday expects). If you also have emails or phone numbers, it's important to include them to improve identity resolution. Faraday will always use the best combination of available identifiers to recognize people. Mapping options are available in Datasets.
  • output_to_streams — Here's where you tell Faraday how to recognize events in your data. Here, we're calling our events orders, because that's how many companies define their customers' transactional behavior, but you can use any name you like, and one dataset may represent multiple event types. You can use the datetime field to specify when the event occurred; in this case it maps to the date column from the CSV. You can also include metadata about products involved in the event and a dollar value associated with the event (the total column above). All of these fields are optional.
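A common failure mode is referencing a column in your mapping that isn't actually in your file. If you're scripting these calls, a quick pre-flight check like the sketch below can catch that before you POST; the payload shape mirrors the example above, and the helper names are ours:

```python
# Sketch: verify that every CSV column referenced in a dataset mapping
# actually exists in the file's header row before POSTing the dataset.
import csv

def referenced_columns(identity_sets: dict, output_to_streams: dict) -> set:
    """Collect every CSV column name the mapping refers to."""
    cols = set()
    for mapping in identity_sets.values():
        for value in mapping.values():
            # Values are either a single column name or a list of names.
            cols.update(value if isinstance(value, list) else [value])
    for stream in output_to_streams.values():
        for field in stream.get("data_map", {}).values():
            cols.add(field["column_name"])
    return cols

def missing_columns(csv_path: str, identity_sets: dict, output_to_streams: dict) -> set:
    """Return mapped columns that are absent from the CSV header row."""
    with open(csv_path, newline="") as f:
        header = set(next(csv.reader(f)))
    return referenced_columns(identity_sets, output_to_streams) - header
```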

Create a cohort

Now you're going to use this identity and event data to formally define the group of people you're trying to organize into personas: your customers. You will reference this specific group both when you define your persona objective and when you later generate predictions. For this tutorial, you want a cohort that includes everyone in the dataset you created. All you have to do is point to the orders stream you defined above and give your cohort a name like "Customers." Like so:

curl --request POST \
     --url https://api.faraday.ai/v1/cohorts \
     --header "Authorization: Bearer YOUR_API_KEY" \
     --header "Content-Type: application/json" \
     --data '
{
     "name": "Customers",
     "stream_name": "orders"
}
'

You'll need the UUID of the cohort you just created in the next step, so copy it now!
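If you're scripting this, the UUID comes back in the JSON response body. A hedged sketch, assuming the response carries the new resource's UUID in a top-level "id" field (check the API reference for the exact response shape):

```python
# Sketch: pull the resource UUID out of a create-resource response.
# Assumes a top-level "id" field in the response JSON.
import json

def extract_id(response_body: str) -> str:
    return json.loads(response_body)["id"]

# e.g. cohort_id = extract_id(resp.read().decode())
```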

Build your personas

Now that you've formally defined your customer group, it's time to move on to prediction. For this tutorial, we're going to create a Persona Set from your customers, which will use ML to cluster your customers into a handful of coherent groups.

You will take the cohort UUID returned in the previous step and use it to make the following call to create a persona set:

curl --request POST \
     --url https://api.faraday.ai/v1/persona_sets \
     --header "Authorization: Bearer YOUR_API_KEY" \
     --header "Content-Type: application/json" \
     --data '
{
     "cohort_id": "YOUR_COHORT_ID",
     "name": "PERSONAS_NAME"
}
'

When you create this persona set, Faraday starts building and validating the appropriate ML model behind the scenes. Remember to save the UUID you get back in your response.

Learn about your personas

Once your Persona Set has finished building, you have a set of personas you can use to better understand and organize your customers. To use them effectively, you'll have to understand what we found! To do that, you can retrieve your Persona Set:

curl --request GET \
     --url https://api.faraday.ai/v1/persona_sets/YOUR_PERSONA_SET_ID \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer YOUR_API_KEY'
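To get a quick read on what came back, you can pull out the persona names from the response. This sketch assumes the persona set JSON contains a "personas" array whose entries carry a "name" field; check the API reference for the actual response shape:

```python
# Sketch: list the personas Faraday discovered.
# Assumes a "personas" array with "name" fields in the response JSON.
import json

def persona_names(response_body: str) -> list:
    return [p.get("name") for p in json.loads(response_body).get("personas", [])]
```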

Generate persona assignments (predictions)

Finally, you can tell Faraday who you may want persona assignments (predictions) for, and then retrieve those results.

Set up your scope

To do this, you will first create a Scope—this is how you tell Faraday which predictions you may want on which populations. You'll need two UUIDs from above:

  1. The persona set object you created
  2. The cohort you created earlier (customers)

curl --request POST \
     --url https://api.faraday.ai/v1/scopes \
     --header 'Accept: application/json' \
     --header 'Authorization: Bearer YOUR_API_KEY' \
     --header 'Content-Type: application/json' \
     --data '
{
     "payload": {
          "persona_set_ids": [
               "YOUR_PERSONA_SET_ID"
          ]
     },
     "population": {
          "cohort_ids": [
               "YOUR_CUSTOMERS_COHORT_ID"
          ]
     },
     "name": "SCOPE_NAME",
     "preview": false
}
'

Checking scope status

Faraday proactively builds and caches the predictions you defined in your scope, which may take some time. To see whether your scope is ready, you can fetch https://api.faraday.ai/v1/scopes/{scope_id}.
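In a script, this becomes a simple polling loop. A hedged sketch, assuming the scope JSON exposes a "status" field that eventually reads "ready" (check the API reference for the exact status values):

```python
# Sketch: poll a scope until its predictions are built.
# Assumes the scope JSON has a "status" field that becomes "ready".
import json
import time
import urllib.request

API_BASE = "https://api.faraday.ai/v1"

def scope_is_ready(scope_json: str, ready_status: str = "ready") -> bool:
    return json.loads(scope_json).get("status") == ready_status

def wait_for_scope(scope_id: str, api_key: str, interval_s: int = 30) -> None:
    req = urllib.request.Request(
        f"{API_BASE}/scopes/{scope_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    while True:
        with urllib.request.urlopen(req) as resp:
            if scope_is_ready(resp.read().decode()):
                return
        time.sleep(interval_s)
```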

Deploying predictions

Now it's time to download the persona assignment results! The simplest way to do this is to retrieve them all in a single CSV file.

Add a target

First you'll add a Target to your scope with type hosted_csv.

curl --request POST \
     --url https://api.faraday.ai/v1/targets \
     --header "Authorization: Bearer YOUR_API_KEY" \
     --header "Content-Type: application/json" \
     --data '
{
     "name": "personas_csv_export",
     "options": {
          "type": "hosted_csv"
     },
     "representation": {
          "mode": "hashed"
     },
     "scope_id": "YOUR_SCOPE_ID"
}
'

Retrieve your CSV

Use the tool of your choice to download your CSV:

curl --request GET \
     --url https://api.faraday.ai/v1/targets/YOUR_TARGET_ID/download.csv \
     --header "Authorization: Bearer YOUR_API_KEY" \
     --header "Accept: application/json" > my_local_file.csv
open my_local_file.csv
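Once downloaded, the results are an ordinary CSV you can process however you like. For example, this sketch tallies how many customers landed in each persona; the "persona" column name is illustrative, so inspect your file's header row for the actual column names your target produces:

```python
# Sketch: count how many rows fall into each persona.
# The persona column name is illustrative; check your CSV's header.
import csv
from collections import Counter

def persona_counts(csv_path: str, persona_column: str = "persona") -> Counter:
    with open(csv_path, newline="") as f:
        return Counter(row[persona_column] for row in csv.DictReader(f))
```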

🚧️Preview mode

If a scope is in preview mode, you will only get a sample of the complete results back. This helps you validate the results you're getting and build your integrations before incurring charges.