Trillium Discovery
Investigating data quality is like searching for a handful of needles in a haystack. On top of that, you need to involve people who can recognise a needle when they see one and flag it.
For that you need a solution designed to help business users perform data quality assessments on large data volumes, and to let them monitor data quality continuously.
Automated, out-of-the-box data profiling capabilities (a minimal profiling sketch follows this list):
Discover data structure; generate data statistics.
Analyse data content; identify personal data and data relationships.
Identify data dependencies, keys and joins.
Create and validate business rules.
Quantify and prioritise data quality issues.
Report on data quality metrics for accuracy, consistency and completeness.
Monitor quality thresholds and trends over time.
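Trillium Discovery delivers all of this through its own interface, but the underlying idea of automated profiling is easy to illustrate. The sketch below (Python with pandas; the table and column names are invented for illustration) computes the kind of per-column structure and content statistics a profiler generates automatically.

```python
import pandas as pd

# Hypothetical sample data standing in for a profiled source table.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 4],
    "email": ["a@x.com", "b@x.com", None, "d@x.com", "d@x.com"],
    "postcode": ["AB1 2CD", "EF3 4GH", "IJ5 6KL", None, "MN7 8OP"],
})

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Compute basic per-column profiling statistics."""
    stats = []
    for col in df.columns:
        s = df[col]
        stats.append({
            "column": col,
            "inferred_type": str(s.dtype),
            "rows": len(s),
            "nulls": int(s.isna().sum()),
            "completeness_%": round(100 * s.notna().mean(), 1),
            "distinct": int(s.nunique(dropna=True)),
            "min": s.dropna().min() if s.notna().any() else None,
            "max": s.dropna().max() if s.notna().any() else None,
        })
    return pd.DataFrame(stats)

print(profile(df).to_string(index=False))
```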
Key capabilities
Advanced profiling
Profiling of a wide range of basic statistics.
Advanced profiling options such as pattern analysis, soundex routines, metaphones, and natural key analysis (see the sketch after this list).
Join and dependency analysis.
Comparison against defined data standards and regulations using business rules.
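Trillium's own routines are proprietary, but the flavour of pattern analysis and phonetic matching can be sketched generically. The example below (the function names are ours, not the product's) shows character-pattern analysis, mapping letters to 'A' and digits to '9', and a simplified American Soundex code for spotting likely duplicate names.

```python
def char_pattern(value: str) -> str:
    """Map letters to 'A' and digits to '9', keeping other characters,
    so e.g. 'AB1 2CD' -> 'AA9 9AA'. Values sharing a pattern can be grouped."""
    return "".join("A" if c.isalpha() else "9" if c.isdigit() else c for c in value)

_SOUNDEX = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
            **dict.fromkeys("dt", "3"), "l": "4",
            **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(name: str) -> str:
    """Simplified American Soundex: first letter plus up to three digit codes,
    skipping vowels and collapsing adjacent duplicate codes."""
    name = "".join(c for c in name.lower() if c.isalpha())
    if not name:
        return ""
    codes = [_SOUNDEX.get(c, "") for c in name]
    out = [name[0].upper()]
    prev = codes[0]
    for code in codes[1:]:
        if code and code != prev:
            out.append(code)
        prev = code
    return ("".join(out) + "000")[:4]

print(char_pattern("AB1 2CD"))                # AA9 9AA
print(soundex("Robert"), soundex("Rupert"))   # R163 R163 -> likely duplicates
```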
Monitoring
Monitor data sources for specific events and errors.
Notify users when data does not meet requirements, such as unacceptable values or incomplete fields.
Give users the environment necessary to understand the true nature of their current data landscape and how data relates across systems.
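The product wires monitoring into its repository and alerting, but conceptually it reduces to evaluating metrics against thresholds on every run and alerting on breaches. A minimal generic sketch, assuming illustrative field names and thresholds:

```python
import pandas as pd

# Illustrative thresholds: minimum acceptable completeness per field.
THRESHOLDS = {"email": 0.95, "postcode": 0.99}

def check_completeness(df: pd.DataFrame, thresholds: dict) -> list[str]:
    """Return alert messages for fields whose completeness falls below threshold."""
    alerts = []
    for field, minimum in thresholds.items():
        completeness = df[field].notna().mean()
        if completeness < minimum:
            alerts.append(
                f"ALERT: {field} completeness {completeness:.1%} "
                f"is below threshold {minimum:.0%}"
            )
    return alerts

df = pd.DataFrame({"email": ["a@x.com", None, None],
                   "postcode": ["AB1", "CD2", "EF3"]})
for alert in check_completeness(df, THRESHOLDS):
    print(alert)
```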
Model control functions
Data architects and data modellers can rely on results from key integrity, natural key, join, and dependency analysis.
Physical data models can be produced by reverse-engineering the data, helping to validate models and identify problem areas.
Venn diagrams can be used to identify outlier records and orphans.
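As a generic illustration of the join and orphan idea (the table and key names below are hypothetical), comparing the key sets of a parent and a child table yields the three Venn regions: matched keys, parents without children, and orphaned child records.

```python
# Hypothetical keys from two tables joined on customer_id.
customers = {1, 2, 3, 4, 5}       # keys in the parent table
orders_cust = [2, 3, 3, 6, 7]     # foreign keys in the child table (may repeat)

matched   = customers & set(orders_cust)   # keys present on both sides
childless = customers - set(orders_cust)   # parents with no child rows
orphans   = set(orders_cust) - customers   # child rows with no parent (orphans)

print(f"matched={sorted(matched)} childless={sorted(childless)} "
      f"orphans={sorted(orphans)}")
# matched=[2, 3] childless=[1, 4, 5] orphans=[6, 7]
```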
Business UX/UI
The user interface is designed specifically for business users. It is intuitive, easy to use, and allows immediate drill-down for further analysis without hitting production systems.
Collaborative environment
Team members can log into a common repository, view the same data, and contribute to prioritising and determining appropriate actions on anomalies, improvements, integration rules, and monitoring thresholds.
Comprehensive repository
The repository stores metadata created by reverse-engineering the data.
This metadata can be summarised, synthesised, drilled down into, and used to recreate replicas of the original source records.
Business rules and data standards can be developed within the repository to run systematically against production systems, complete with alert notifications.
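The repository's rule syntax is Trillium's own; purely as an illustration, a business rule can be modelled as a named predicate evaluated per record, producing a pass rate that can feed alerting. A minimal sketch with invented rule and field names:

```python
import re
import pandas as pd

# Illustrative rules: name -> predicate over a record (pandas Series).
RULES = {
    "postcode_present": lambda row: pd.notna(row["postcode"]),
    "email_format": lambda row: pd.notna(row["email"])
        and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", row["email"]) is not None,
}

def evaluate_rules(df, rules):
    """Return the pass rate of each rule across all records."""
    return {name: df.apply(pred, axis=1).mean() for name, pred in rules.items()}

df = pd.DataFrame({"postcode": ["AB1 2CD", None],
                   "email": ["a@x.com", "bad-email"]})
for name, rate in evaluate_rules(df, RULES).items():
    print(f"{name}: {rate:.0%} of records pass")
```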
Trillium Quality
Data can be cleansed and standardised directly using Trillium Quality, including name and address cleansing, address validation, and recoding.
Cleansed data is placed into new fields, never overwriting source data.
The cleansed files can be used immediately in other systems and business processes.
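Trillium Quality's cleansing engine is far richer than this, but the "new fields, never overwrite" convention is simple to illustrate. A minimal sketch with a hypothetical name-standardisation routine:

```python
import pandas as pd

df = pd.DataFrame({"name": [" joHn  SMITH ", "Mary-Anne o'brien"]})

def standardise_name(raw: str) -> str:
    """Trim, collapse whitespace, and title-case a raw name value."""
    return " ".join(raw.split()).title()

# Cleansed output goes into a new column; the source column is untouched.
df["name_cleansed"] = df["name"].map(standardise_name)
print(df)
```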
Integrates with Data360 Govern
Automatically publish business rule metrics, quality dimensions, profile statistics, and data lineage to Data360 Govern.
Ensure that rules defined in your governance solution are delivered to Trillium Discovery for evaluation, and that the resulting data quality metrics are made available to the governance tool for viewing quality results.
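The real exchange is handled by the products' built-in integration; purely as a conceptual sketch, publishing quality metrics amounts to posting structured results to the governance platform. The endpoint URL and payload fields below are invented for illustration and are not the actual connector API.

```python
import json
from urllib import request

# Hypothetical payload shape for publishing a data quality metric.
payload = {
    "asset": "crm.customers",
    "rule": "email_format",
    "dimension": "validity",
    "pass_rate": 0.97,
    "run_timestamp": "2024-01-15T08:00:00Z",
}

# Hypothetical governance endpoint; the real integration uses the
# products' built-in connector, not a hand-rolled HTTP call.
req = request.Request(
    "https://govern.example.com/api/quality-metrics",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req)  # uncomment against a real endpoint
```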