Together we smartify your data

Itility Data Factory

The platform to turn your data into intelligence
to do business smarter

All companies cherish their data. From data we seek the intelligence that enables us to deliver better products and services. Handling data, however, is tough. Intelligence is not achieved in a single step; it is an iterative process that yields continuous insight. And once data has been turned into intelligence, it needs to be embedded into your business to generate value.

A well-implemented data factory is a prerequisite for all of this. Data goes in, intelligence comes out. It allows you to gather your data, process it, and embed the resulting intelligence directly into your business. A data factory is the means to become data-driven in the truest sense.

The Itility Data Factory brings together the competencies and tooling needed for this. It gives your data scientists optimal support, with ingestion standards, pre-fab data science models, security features, and ‘run’ practices. Our analytics DevOps team helps your specialists with data ingestion, processing, and modeling — and finally embeds the results into your business. We implement on your data lake or ours, and on any cloud; the key stays with you.

Itility Data Factory provides a platform for turning your data into continuous insights, gaining maximum value that moves your business forward: a true implementation of applied analytics.

Benefits

What does it mean for your company?

Ingestion pipelines ensure your data flow is smooth, continuous, and governed. Code libraries, coding standards, version control, and pipelines that automate testing ensure that the data team can visualize and create models in a repeatable way. Tight user-access control is available at multiple levels, integrated with your enterprise directory; encryption is optional, and access logging is built in. Multiple built-in security features help ensure compliance with GDPR requirements.

With our platform — consisting of pre-installed data science libraries, a set of coding skeletons, and visualization features — data scientists can perform analytics and data science efficiently and effectively. They are not bothered with technical hassle. Instead, they drive the pre-processing and create and train data-science models in an agile way of working. This lets them smartify data fast, achieving results in as little as a few days – or fail fast and move on to the next use case. And if a use case is proven to drive business value, it can quickly be put into production.

We also assist your data scientists by providing pipelines to handle the workflow, including automated testing of the created models and their versions.

Automation pipelines in the data factory make it easy to spin up a development environment to test various machine-learning models. Built-in code libraries and tooling speed up development. And as soon as a model has proven its value, the deployment pipeline enables you to rapidly deploy it to production. Version control, automated unit tests, and integration tests ensure that this happens in a controlled and repeatable manner.
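
As an illustration, the kind of automated model check such a pipeline could run before promoting a model might look like the minimal Python sketch below. The train_model function, the synthetic data, and the 0.9 quality threshold are hypothetical examples, not part of the actual platform.

import numpy as np
from sklearn.linear_model import LinearRegression

def train_model(X, y):
    # Hypothetical stand-in for a project's real training code.
    return LinearRegression().fit(X, y)

def test_model_beats_threshold():
    # Synthetic data: y = 3x + noise, so a linear fit must score well.
    rng = np.random.default_rng(42)
    X = rng.uniform(0, 10, size=(200, 1))
    y = 3 * X[:, 0] + rng.normal(0, 0.5, size=200)

    model = train_model(X, y)

    # Gate deployment on a minimum R^2 score; tune the threshold per use case.
    assert model.score(X, y) > 0.9

A test runner such as pytest executes checks like this on every commit, so a model version that regresses never reaches production unnoticed.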

After creating and extensively validating the models, the next step is to embed the results in your business. Together with your domain expert, we decide on the best way to do this. Perhaps the output is used to change a working process. Or software engineers turn the outputs into an end-user software application. Or software is used to transform human activities into machine-autonomous behavior.

The data scientist and data engineer can develop using Python, R, or Scala. We visualize with Power BI, Splunk, Dash, and Shiny – whatever best suits your case. Any library can be used, and frameworks such as TensorFlow, PyTorch, and Spark are readily available and can scale on demand. We can build your data factory infrastructure on the technology you prefer: Hadoop (main distributions), Azure Data Lake, AWS Data Lake, or Splunk. Other technologies can be added. You can also rely on our standard data factory, instantly available, managed as a service, and based on Azure and Databricks technology.

The data factory flow

An end-to-end data flow requires quite a skill set

We work in agile teams to manage development as well as operations (DevOps). Our DevOps teams have the multidisciplinary skill set that a data factory demands. We run your data as a factory with you as the factory lead, product owner, and data owner.

Infra/Data engineer

Ingest

The infrastructure / data engineer sets up the data lake platform including all security measures and connectivity, and takes care of automated ingestion of data and the storage of raw data.
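
To make this concrete, below is a minimal sketch of such an automated ingestion step, written in PySpark as it might run on a Spark-based lake (for example the Azure/Databricks flavor of the platform). All paths and the 'sensors' source are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-sensors").getOrCreate()

# Read the source files exactly as delivered by the source system.
raw = spark.read.option("header", True).csv("/landing/sensors/*.csv")

# Stamp each record with its ingestion date and store it unchanged in the
# raw zone of the lake, partitioned so it can be reprocessed cheaply later.
(raw.withColumn("ingest_date", F.current_date())
    .write.mode("append")
    .partitionBy("ingest_date")
    .parquet("/datalake/raw/sensors"))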

Data engineer

Pre-process

The raw data is transformed into enriched data by slicing and dicing, aggregating and filtering, combining with other data sources, and deciding on the appropriate schemas. The data engineer also performs data processing, monitoring, and data quality checks.
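
A minimal PySpark sketch of such a pre-processing step; the table paths and the column names are hypothetical examples.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("preprocess-sensors").getOrCreate()

raw = spark.read.parquet("/datalake/raw/sensors")
machines = spark.read.parquet("/datalake/raw/machines")

enriched = (
    raw.filter(F.col("temperature").isNotNull())      # basic quality check
       .groupBy("machine_id", "ingest_date")          # aggregate per machine, per day
       .agg(F.avg("temperature").alias("avg_temperature"),
            F.count("*").alias("n_readings"))
       .join(machines, on="machine_id", how="left")   # combine with master data
)

enriched.write.mode("overwrite").parquet("/datalake/enriched/sensor_daily")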

Data analyst

Visualize

Once the data is pre-processed, the data analyst can visualize it and report on current and past values.
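
For example, a minimal dashboard built with Dash (one of the visualization tools named above); the data file and its columns are hypothetical.

import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Hypothetical export of the enriched data produced in the pre-processing step.
df = pd.read_csv("sensor_daily.csv")

fig = px.line(df, x="ingest_date", y="avg_temperature", color="machine_id",
              title="Average temperature per machine, per day")

app = Dash(__name__)
app.layout = html.Div([html.H1("Factory dashboard"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)  # older Dash versions use app.run_server instead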

Data scientist

Model

The data scientist models the data to predict future state and automate decisions, using machine-learning models, optimization algorithms, and regressions.
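
As an illustration, a minimal modeling step in Python with scikit-learn; the feature columns and the next_day_output target are hypothetical examples.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical enriched data with a known outcome to learn from.
df = pd.read_csv("sensor_daily.csv")
X = df[["avg_temperature", "n_readings"]]
y = df["next_day_output"]

# Hold out part of the data so the model is judged on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("MAE on held-out data:", mean_absolute_error(y_test, model.predict(X_test)))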

Software engineer

Embed via software

To embed the results into an autonomous system, our software engineers work together with your domain experts to translate the model outcomes into an application.

Business analyst

Embed via process

Our business analyst then collaborates with your domain experts to define changes in the business processes in order to embed the results into day-to-day operations.

The Data Factory flow summarized:

How does it work?

In 3 cycles the data factory will turn your data into value

Discover

Together with the domain expert, you generate visualizations and hypotheses of where the hidden value could be. We then jointly define, model, and verify new value streams. The first intelligent use cases are born and proven.

Industrialize

Your data is flowing in a controlled manner through the data factory. Here, your data is securely ingested, stored, and processed. The factory runs a variety of these data pipelines, used for further learning. In parallel, the pipelines are embedded in your daily processes to offer value in a consistent and uninterrupted way.

Learn

The domain experts won’t sit still. The factory offers ways to further improve and tweak the models toward even more added value. True digital transformation is about doing, learning, and adapting.

Where do we start?

In a quick data deep-dive, we analyze one of your data sets for smartification, and show what a first Discover cycle looks like and what the result could be.

Contact us — and request a demo of the Itility Data Factory in which we demonstrate all three steps.

Geert Vorstermans

"Let's smartify your data together"

References

Use cases from our customers

Organizations are wasting time by reinventing the wheel over and over, starting from scratch to create data lake infrastructures. Why not simply identify the desired functional, cost, and security benefits — and leave the build and run to the experts? Many of our customers have opted for this approach. The gains are visible in the areas of increased productivity, better diagnostics, and better controls.

Amber

Amber provides an electric car service offered on a pay-per-minute basis. Cars can be picked up and dropped off at strategically positioned hubs — there are currently 45 hubs in the Netherlands.

Amber chose the Itility Data Factory as the platform to design and run a prediction algorithm as part of its daily hub-replenishment activities. Faulty replenishment drives up costs, because a car must then be provisioned from the in-place emergency car pool. The hubs are now replenished based on the algorithm's outcome.

Benefits:
Decreased costs by optimizing hub replenishment.
Automated and controlled data modeling via an automatic feed to the replenishment app.
Data ingestion from secondary sources, such as weather and vacation schedules, and storing and processing the data securely in the high-availability set-up of the data factory.

Specifics:
Reference visit and demo available
Runs on Azure Cloud
Ampleon

Ampleon, a carve-out from NXP, seeks to continuously optimize its production facilities, digitally transforming manufacturing operations by deploying edge computing, sensors, and cloud-based solutions. Ampleon has chosen the cloud-based Itility Data Factory to automatically ingest and process data from a variety of sources.

Ampleon domain experts work closely with Itility analysts to visualize insights and report them to the Factory Operations, Equipment, and Test Engineering teams, who use them to optimize maintenance practices and predict factory machinery output.

Benefits: 
Standardized operations reporting to support a paperless factory.
Automated data ingestion from multiple data sources via pipelines.
Ever-growing insight into factory output to improve yield and enable traceability.
Incorporation of data science methods for self-optimization and self-diagnosis.

Specifics:
Reference visit possible
Runs on Azure Cloud

Stories

Dive deeper into our stories

ITILITY NL

Flight Forum 3360
5657 EW Eindhoven
The Netherlands

+31 (0)88 00 46 100
info@itility.nl
www.itility.nl

ITILITY US

840 North Hillview Drive
Milpitas, CA 95035
United States

info@us-itility.com
www.us-itility.com

© Copyright – Itility 2019