The flow

Learn how your data flows through Faraday from start to finish.

You can think of Faraday like a pipeline, flowing from your data, through the Faraday predictive machinery, then out into your application, where you use predictions to improve customer experiences.

Your data

Everything starts with your data—specifically, your customer data. This data can be stored anywhere thanks to Faraday's integrations. Faraday uses this data, along with its own built-in consumer data, to find patterns that predict your objectives.

Note that if you don't want to use your own customer data right away, you can use our representative sample data.


Connection

Faraday uses a connection to ingest your data from your systems. You can create a connection with any of our supported integrations.

While most users eventually create live connections to their data, many start by skipping this step and uploading CSV files they've exported from their systems.
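As a concrete illustration, here's what a typical CSV export of customer data might look like. The column names below are hypothetical examples, not columns Faraday requires:

```python
import csv
import io

# A hypothetical CSV export of customer data. The column names are
# illustrative -- Faraday does not require this exact layout.
raw = """email,first_name,last_name,zipcode,ordered_at,total
jane@example.com,Jane,Doe,05401,2023-04-02,59.00
sam@example.com,Sam,Smith,05602,2023-05-18,24.50
"""

rows = list(csv.DictReader(io.StringIO(raw)))
print(len(rows))           # 2
print(rows[0]["zipcode"])  # 05401
```

Note that identifier columns (name, email, zipcode) sit alongside event details (order date, order total) — the dataset step below is where you tell Faraday which is which.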


Dataset

Create a dataset to describe what the data coming from your systems means. This typically involves two components:

  1. Indicate how to recognize people in your data by choosing columns that map to common identifiers (name, zipcode, etc.).

  2. Define which event stream(s) your dataset contributes to (e.g. transactions, signups).

Datasets can also assert traits—details about the people in your data rather than events they've experienced—but that's less common.
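The components above can be pictured as a declarative mapping. The structure below is a simplified, hypothetical sketch of that idea — the field names are illustrative, not Faraday's actual API schema:

```python
# A simplified, hypothetical sketch of a dataset definition --
# the field names are illustrative, not Faraday's actual API schema.
dataset = {
    "name": "orders_export",
    # 1. Map columns in your data to common person identifiers.
    "identity": {
        "email": "email",
        "first_name": "first_name",
        "last_name": "last_name",
        "postcode": "zipcode",
    },
    # 2. Declare which event stream(s) this dataset contributes to.
    "output_events": {
        "transaction": {"datetime_column": "ordered_at", "value_column": "total"},
    },
    # Optionally, assert traits about the people themselves.
    "traits": {"loyalty_tier": "tier"},
}

print(sorted(dataset["output_events"]))  # ['transaction']
```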

Event stream

Datasets emit events into event streams, such as "Transaction" or "Signup."

This is the raw material that Faraday works with to find patterns and make predictions—one of two basic building blocks on the platform (the other being cohorts).


Cohort

Cohorts are formal definitions of groups of people important to your organization, like "Customers." Alongside event streams, cohorts are the other basic building block of the Faraday system.

The most common way of defining a cohort is to include everybody who has experienced the same event. For example, you could define a Customers cohort as everyone who has experienced the Transaction event (at least once).

More complex cohort definitions can incorporate rules around frequency, value, traits, and even custom event properties.
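To make that concrete, here is a local sketch (not Faraday code) of how a cohort definition selects people from an event stream, including optional frequency and value rules:

```python
# Local sketch (not Faraday code): a cohort as a rule over an event stream.
transaction_stream = [
    {"person": "jane@example.com", "value": 59.00},
    {"person": "sam@example.com",  "value": 24.50},
    {"person": "jane@example.com", "value": 31.25},
]

def cohort_members(events, min_count=1, min_total_value=0.0):
    """People who experienced the event at least min_count times,
    with at least min_total_value in combined event value."""
    counts, totals = {}, {}
    for e in events:
        counts[e["person"]] = counts.get(e["person"], 0) + 1
        totals[e["person"]] = totals.get(e["person"], 0.0) + e["value"]
    return sorted(p for p in counts
                  if counts[p] >= min_count and totals[p] >= min_total_value)

# "Customers": everyone with at least one transaction.
print(cohort_members(transaction_stream))
# A stricter cohort: repeat purchasers only.
print(cohort_members(transaction_stream, min_count=2))
```

The first call returns both people; the second returns only the repeat purchaser, which mirrors how frequency rules narrow a cohort.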


Objective

An Objective is a behavior you want to predict. Faraday currently supports four types of objectives:

  1. Outcome is for defining a propensity objective, such as likelihood to convert or churn.
  2. Persona set is for defining a clustering objective, which you can use to organize groups of people (like your customers) into coherent, thematic subgroups.
  3. Forecast is for defining a forecasting objective, which helps you predict, for a given event, the total value and frequency experienced by each person.
  4. Recommender is for defining a recommendation objective, which helps you predict which one of several options (such as products in a catalog) each given person is likely to choose.

Regardless of type, defining an objective is a simple process of choosing a cohort or stream that represents historical behavior. You don't need to understand data science or machine learning to make predictions with Faraday.
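The four types above can be sketched as simple declarative structures. These are illustrative only — hypothetical field names, not Faraday's actual API schema:

```python
# Hypothetical sketches of the four objective types. The field names
# are illustrative, not Faraday's actual API schema.
objectives = [
    # Outcome (propensity): likelihood of joining the "customers" cohort.
    {"type": "outcome", "name": "likely_to_convert", "attainment_cohort": "customers"},
    # Persona set (clustering): organize customers into thematic subgroups.
    {"type": "persona_set", "name": "customer_personas", "cohort": "customers"},
    # Forecast: per-person value and frequency of a given event.
    {"type": "forecast", "name": "transaction_forecast", "stream": "transaction"},
    # Recommender: which option each person is likely to choose.
    {"type": "recommender", "name": "next_product", "stream": "transaction"},
]

print([o["type"] for o in objectives])
```

In each case, the only input you supply is the cohort or stream representing the historical behavior — the modeling itself is Faraday's job.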


Pipeline

Next, a Pipeline (also called a Scope in the API for historical reasons) is what you use to choose which predictions you want made for which population. This lets Faraday make and prepare the necessary predictions for deployment.
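A pipeline, then, pairs a population with a set of predictions. The sketch below is a hypothetical illustration of that pairing, not Faraday's actual API schema:

```python
# Hypothetical sketch of a pipeline (a "scope" in the API): which
# predictions to make, for which population. Field names are
# illustrative, not Faraday's actual API schema.
pipeline = {
    "name": "lead_scoring",
    # The population to make predictions about.
    "population": {"cohort": "leads"},
    # The predictions to attach to each person in that population.
    "payload": ["likely_to_convert", "customer_personas"],
}

print(pipeline["payload"])  # ['likely_to_convert', 'customer_personas']
```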


Deployment

Finally, a Deployment (also called a Target in the API) is what you use to declare how and where you want your pipeline deployed.

You can deploy to any connection using our built-in integrations, or choose to retrieve predictions individually in real time with the API.
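For the real-time case, an individual lookup amounts to requesting a person's predictions by identifier. The host, path, and parameters below are hypothetical, purely to illustrate the shape of such a request — consult the API reference for the real endpoint:

```python
from urllib.parse import urlencode

# Hypothetical sketch of a real-time prediction lookup. The host,
# path, and parameter names are illustrative, NOT Faraday's
# documented endpoint.
base = "https://api.example.com/predictions"
params = {"email": "jane@example.com", "pipeline": "lead_scoring"}
url = f"{base}?{urlencode(params)}"

print(url)
```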

Your application

This is the fun part. Now you get to incorporate your predictions into your application in order to build powerful predictive customer experiences, like lead scoring, next best offer, or anything else.
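For example, lead scoring might use a conversion propensity score to decide how each lead is handled. This is an application-side sketch with illustrative thresholds, not anything prescribed by Faraday:

```python
# Sketch of using a propensity prediction in your application:
# route each lead by its predicted likelihood to convert.
# The thresholds and tier names are illustrative.
def route_lead(score: float) -> str:
    """Return a handling tier for a lead given its conversion propensity."""
    if score >= 0.8:
        return "call immediately"
    if score >= 0.4:
        return "nurture campaign"
    return "low-touch email"

print(route_lead(0.91))  # call immediately
print(route_lead(0.10))  # low-touch email
```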

Now that you've got the basics of the Faraday flow, it's time to start building!