Itility Data Factory

Together we smartify your data

The platform you need to turn your data into intelligence, to do business smarter

All companies cherish their data. From data we seek the intelligence that will enable us to deliver better products and services. But handling data is tough. Deriving intelligence from data requires intensive slicing, dicing, and learning from large quantities of raw data. Achieving intelligence is not accomplished in a single step. It is a process performed iteratively to yield continuous insight. Once data has been turned into intelligence, it needs to be embedded into your business to generate value.

A well-implemented data factory is a prerequisite to achieve all this. Data goes in, intelligence comes out. It allows you to gather your data, to process it, and to embed intelligence directly into your business, for instance via an app or an optimized process. A data factory is the means to become data-driven in the truest sense.

The Itility Data Factory brings together the competencies and tooling needed for this. It supports your data scientists optimally, with ingestion standards, pre-fab data science models, security features, and ‘run’ practices. Our analytics DevOps team helps your specialists with data ingestion, processing, and modeling, and finally embeds the results into your business. We implement on your data lake or ours, and on any cloud; the key stays with you.

Itility Data Factory provides a platform for turning your data into continuous insights, to gain the maximum value that moves your business forward: a true implementation of applied analytics.

Using modern technology, we enforce configurations with high repeatability, scalability, and first-time-right delivery. We solve critical issues before they occur by collecting and analyzing metrics of your IT landscape.

We visualize your IT landscape in an unprecedented way: how it lives, where the costs are, how end-users perceive its performance, and where you should consider security improvements. ICC offers an API for accessing the raw data, and through this facilitates any data consumer, such as your security officer or the financial controller responsible for charging the services.

A platform to stay in control of your IT landscape while implementing functional and maintenance-driven changes: continuous integration and delivery while maintaining flawless operations.

Benefits

What does it mean for your company?

Ingestion pipelines ensure your data flow is smooth, continuous, and governed. Code libraries, coding standards, version control, and pipelines that automate testing ensure that the data team can visualize and create models in a repeatable way. Options for tight user-access control are available at multiple levels, integrated with your enterprise directory. Encryption is optional, and access logging is available, as with any data factory. Multiple built-in security features ensure compliance with GDPR requirements.
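As an illustration, here is a minimal sketch (in Python) of the kind of data-quality gate such an ingestion pipeline can run before a batch enters the data lake. The column names and the null-ratio threshold are assumptions for illustration, not Itility's actual standards.

```python
# Minimal sketch of an ingestion quality gate; the required columns and
# the 5% null-ratio threshold are illustrative assumptions.
import pandas as pd

REQUIRED_COLUMNS = {"sensor_id", "timestamp", "value"}

def validate_batch(df: pd.DataFrame, max_null_ratio: float = 0.05) -> None:
    """Reject a malformed batch before it is stored in the data lake."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"batch is missing required columns: {missing}")
    null_ratio = df["value"].isna().mean()
    if null_ratio > max_null_ratio:
        raise ValueError(f"too many missing values: {null_ratio:.1%}")

batch = pd.DataFrame({
    "sensor_id": ["s1", "s2"],
    "timestamp": pd.to_datetime(["2018-06-01 08:00", "2018-06-01 08:05"]),
    "value": [21.5, 22.1],
})
validate_batch(batch)  # passes silently; a bad batch raises and halts the flow
```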

With our platform, consisting of pre-installed data science libraries, a set of coding skeletons, and visualization features, data scientists can perform analytics and data science in the most efficient and effective way. They are not bothered with technical hassle; instead, they drive the pre-processing and create and train data-science models in an agile way of working. By doing this they can smartify data fast, achieving results in as little as a few days, or fail fast and move to the next use case. And if a use case is proven to drive business value, it can quickly be moved into production.
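As a hedged illustration of what such a coding skeleton can look like, here is a minimal training skeleton in Python with scikit-learn. The synthetic data and the Ridge model are stand-ins, not a prescribed Itility template.

```python
# Minimal sketch of a model-training skeleton: load (here: synthesize)
# enriched data, fit a first model, and report a validation score.
# The data, model, and metric are illustrative stand-ins.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"validation R^2: {model.score(X_val, y_val):.3f}")
```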

We also assist your data scientists by providing pipelines to handle the workflow, including automated testing of the created models and their versions.

Automation pipelines in the data factory make it very easy to spin up a development environment to test varied machine-learning models. Built-in code libraries and tooling will speed development. And as soon as a model has proven its value, the deployment pipeline enables you to rapidly deploy to production. Version control, automated unit tests, and integration tests ensure that this occurs in a controlled and repeatable manner.
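For instance, a deployment pipeline could run a pytest-style quality gate like the sketch below before promoting a new model version. The dataset, model, and the 0.4 threshold are assumptions for illustration only.

```python
# Sketch of an automated quality gate: the pipeline only promotes the
# model if its cross-validated performance beats an agreed threshold.
# Dataset, model, and the 0.4 bar are illustrative assumptions.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def test_model_meets_quality_bar():
    X, y = load_diabetes(return_X_y=True)
    scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
    assert scores.mean() > 0.4, "model below the agreed quality bar"
```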

After creating and extensively validating the models, the next step is to embed the results in your business. Together with your domain expert, we decide on the best way to do this. Perhaps the output is used to change a working process. Or software engineers turn the outputs into an end-user software application. Or software is used to transform human activities into machine-autonomous behavior.

The data scientist and data engineer can develop using Python, R, or Scala. We visualize with Power BI, Splunk, Dash, and Shiny, whatever best suits your case. Any library can be used, and frameworks such as TensorFlow, PyTorch, and Spark are readily available and can scale on demand. Our data factory infrastructure can be based on Hadoop (main distributions), Azure Data Lake, AWS Data Lake, or Splunk. And when new technology is required in the data factory, it can be added easily and rapidly, and embedded in the data pipelines.
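As a small taste of that stack, here is a minimal PyTorch training loop. The model and the random data are purely illustrative; in the factory, the same kind of code would run on pre-provisioned, scalable infrastructure.

```python
# Tiny PyTorch sketch: fit a linear model to random data for a few steps.
# Purely illustrative; no real factory data or model is implied.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 3), torch.randn(64, 1)

for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.3f}")
```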

Itility Data Factory: how does it work?

Benefits of the Itility Data Factory for a data scientist

The data factory flow

An end-to-end data flow requires quite a skill set

We work in agile teams to manage development as well as operations (DevOps). Our DevOps teams have the multidisciplinary skill set that a data factory demands. We run your data as a factory with you as the factory lead, product owner, and data owner.

We are still in a phase where data scientists try to build houses with their bare hands. Besides access to an adequate toolset, a comprehensive skill set is required to use that toolset effectively.

  • The infrastructure/data engineer sets up the data lake platform, including all security measures and connectivity, and takes care of the automated ingestion of data and the storage of raw data. Then the data engineer transforms the raw data into enriched data by slicing and dicing, aggregating and filtering, combining it with other data sources, and determining the appropriate schemas (a minimal sketch of such an enrichment step follows this list). The data engineer also performs data processing, monitoring, and data quality checks.
  • Once pre-processed, the data analyst can examine the data to visualize and report on the current and past values, whereas the data scientist can model the data to predict and automate future states, using machine learning models, optimization algorithms, and regressions.
  • To embed the results into the day-to-day operations, or even into an autonomous system, software engineers and business analysts are required.
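The sketch referenced in the first bullet: a toy example in Python/pandas of filtering raw records, aggregating them, and joining in a second source. All machine data here is invented for illustration.

```python
# Toy enrichment step: filter raw records, aggregate per machine, and
# combine with a second data source. All data here is invented.
import pandas as pd

raw = pd.DataFrame({
    "machine": ["m1", "m1", "m2", "m2"],
    "status": ["ok", "error", "ok", "ok"],
    "duration_s": [120, 30, 95, 101],
})
sites = pd.DataFrame({"machine": ["m1", "m2"], "site": ["Eindhoven", "Milpitas"]})

enriched = (
    raw[raw["status"] == "ok"]                                 # filter
    .groupby("machine", as_index=False)["duration_s"].mean()   # aggregate
    .merge(sites, on="machine")                                # combine sources
)
print(enriched)
```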

How do we do it?

In 3 steps we will turn your data into value

Discover

Together with the domain expert, we first generate visualizations and hypotheses of where the hidden value could be and how data science could improve your system or business process.
Let the data scientists transform data into intelligence, in close cooperation with your domain experts. Our vast experience with varied customers and diverse use cases speeds the process of retrieving value from data. In the Discover phase, we design your data factory based on the Itility Data Factory, and prepare the foundation for industrializing your applied analytics.

Industrialize

Your data begins to flow in a controlled manner through the data factory. Here, your data is securely ingested, stored, and processed. The factory runs a number of pipelines.
Pipelines drive data ingestion and thus ensure that data flows flawlessly and continuously. Data science pipelines are set up and used for model training, testing, and validation in a controlled way. Visualizations produce intermediate results, which the domain expert can use to validate and steer the data science.
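A toy sketch of the pipeline idea: stages run in a fixed order, and a failing stage halts the flow, so data only moves forward in a controlled way. The stage names and logic are invented; a real factory would use a proper orchestrator.

```python
# Toy pipeline: each stage transforms the output of the previous one,
# and any exception halts the flow. Stages and data are invented.
def ingest(records):
    return [r for r in records if r is not None]        # drop empty records

def preprocess(records):
    return [float(r) for r in records]                  # normalize types

def train(records):
    return sum(records) / len(records)                  # stand-in "model"

def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)                              # halt on any failure
    return data

print(run_pipeline([1, None, "2.5", 4], [ingest, preprocess, train]))  # 2.5
```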

Embed

The models and analyses that emerge from the data factory must be embedded to yield day-to-day business value.

Where do we start?

In a quick data deep-dive, we analyze one of your data sets for smartification, and show what a first Discover cycle will look like and what the result could be.

Contact us — and request a demo of the Itility Data Factory in which we demonstrate all three steps.

Geert Vorstermans

"Let's smartify your data together"

References

Use cases from our customers

Organizations waste time reinventing the wheel over and over, starting from scratch to create data lake infrastructures. Why not simply identify the desired functional, cost, and security benefits, and leave the build and run to the experts? Many of our customers have opted for this approach. The gains are visible in increased productivity, better diagnostics, and better controls.

Amber

Amber provides an electric car service offered on a pay-per-minute basis. Cars can be picked up and dropped off at strategically positioned hubs — there are currently 45 hubs in the Netherlands.

Amber chose the Itility Data Factory as the platform to design and run a prediction algorithm as part of its daily hub-replenishment activities. Faulty replenishment drives up costs, as a car must then be provisioned from the in-place emergency car pool. Based on the algorithm's outcome, the hubs are replenished.
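As a hedged illustration of such a prediction algorithm, a demand forecast per hub could look like the sketch below. All feature names, data, and the model choice are invented; this is not Amber's actual algorithm.

```python
# Illustrative sketch of a hub-demand prediction from calendar and
# weather features. Everything here is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 7, n),      # day of week
    rng.integers(0, 24, n),     # hour of day
    rng.random(n),              # chance of rain
    rng.integers(0, 2, n),      # vacation-period flag
])
# Synthetic demand: more cars needed on weekdays, fewer when it rains.
y = 5 + 2 * (X[:, 0] < 5) - 3 * X[:, 2] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
wednesday_morning = np.array([[2, 8, 0.7, 0]])  # rainy Wednesday, 08:00
print(f"predicted cars needed: {model.predict(wednesday_morning)[0]:.1f}")
```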

Benefits:
  • Decreased costs by optimizing hub replenishment.
  • Automated and controlled data modeling, with an automatic feed to the replenishment app.
  • Data ingestion from secondary sources, such as weather and vacation schedules, with the data stored and processed securely in the high-availability set-up of the data factory.

Specifics:
  • Reference visit and demo available
  • Runs on Azure Cloud
Ampleon

Ampleon, a carve-out from NXP, seeks to continuously optimize its production facilities, digitally transforming manufacturing operations by deploying edge computing, sensors, and cloud-based solutions. Ampleon chose the Itility Data Factory's cloud-based solution to automatically ingest and process data from varied sources.

Ampleon domain experts work closely with Itility analysts to visualize insights and report them to the Factory Operations, Equipment, and Test Engineering teams, who use them to optimize maintenance practices and predict factory machinery output.

Benefits:
  • Standardized operations reporting to support a paperless factory.
  • Automated data ingestion from multiple data sources via pipelines.
  • Ever-growing insight into factory output to improve yield and enable traceability.
  • Incorporation of data science methods for self-optimization and self-diagnosis.

Specifics:
  • Reference visit possible
  • Runs on Azure Cloud

Stories

Dive deeper into our stories

ITILITY NL

Flight Forum 3360
5657 EW Eindhoven
The Netherlands

+31 (0)88 00 46 100
info@itility.nl
www.itility.nl

ITILITY US

840 North Hillview Drive
Milpitas, CA 95035
United States

info@us-itility.com
www.itility.us

© Copyright – Itility 2018