Dremio’s open source, self-service data platform addresses the key data analysis challenges facing modern organizations: slow queries on massive data sets that hold back interactive analytics with BI and data science tools; the complexity of connecting data from disparate sources such as data lakes, NoSQL databases, and relational databases; and long lead times for data engineering tasks. Dremio accelerates time-to-insight by empowering analysts and data scientists to work independently with data across the enterprise, while preserving governance and security and providing insight into data lineage.
“Dremio transparently enables unprecedented time-to-insight and, unlike traditional approaches that require building a data warehouse or rely on point-to-point, single-server designs, connects any analytical process to any data source and scales from one to more than 1,000 servers, running in the cloud, on dedicated hardware, or in a Hadoop cluster,” said Kelly Stirman, vice president of strategy and CMO, Dremio. “With Dremio, data consumers can perform critical data tasks themselves, without being dependent on IT. We bring self-service to the entire analytics stack. With this approach, analysts and data scientists are more independent and self-directed.”
Enhancements to Data Reflections
Data Reflections™ provide a breakthrough in performance that transparently accelerates processing by up to 1,000x. Unlike traditional cubes, extracts, and data marts, Data Reflections leverage Apache Arrow for breakthrough performance gains and are entirely invisible to end users – Dremio’s query planner automatically selects the best reflections to accelerate queries for ad hoc requests, BI, and data science workloads using libraries such as TensorFlow and Scikit-learn. This release includes:
- Starflake Data Reflections. Dremio can now automatically detect star and snowflake schemas, and other variations of joined datasets, and accelerate a wide range of queries that involve all tables or a subset thereof. This capability greatly improves user experience, simplifies administration, and lowers the cost of deploying workloads on Dremio.
- 100x Improvement in Reflection Management Scalability. This release includes a new reflection management engine built on a state-of-the-art, relational algebra-based dependency graph. The engine automatically optimizes the prioritization, ordering, and queueing of reflection refreshes, and provides sophisticated error recovery. These improvements reduce management overhead, speed reflection updates, and significantly reduce resource utilization for maintenance tasks.
- Native Support for Cloud Data Lakes. Now users can take advantage of cost-effective object stores for Data Reflections, including Amazon S3 and Azure Data Lake Store. Unlike traditional technologies, Dremio separates compute and storage capabilities to allow for independent scaling of resources, and provides optimal in-memory performance with the cost advantages, elasticity, and unlimited scalability of cloud object stores.
- Vectorized Processing with Apache Arrow. Enhancements to Apache Arrow in this release, contributed by Dremio engineers, provide up to a 60% reduction in query latency across a wide range of workloads. As a result, end-user experience is improved and users can deploy larger workloads without increasing operational costs.
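As a rough illustration of the kind of pattern recognition Starflake Data Reflections perform, the shape of a star schema can be spotted in a join graph by finding the table joined against the most dimension tables. The heuristic and names below are assumptions for illustration only, not Dremio’s actual algorithm:

```python
# Hypothetical sketch: identify the likely fact table of a star schema
# from observed join edges. This is NOT Dremio's implementation -- just
# an illustration of recognizing star/snowflake join shapes.
def detect_star_fact_table(join_edges, min_dimensions=2):
    """Given (left_table, right_table) join pairs, return the table
    joined against the most distinct partners -- the likely fact table
    of a star schema -- or None if no table has enough dimensions."""
    partners = {}
    for left, right in join_edges:
        partners.setdefault(left, set()).add(right)
        partners.setdefault(right, set()).add(left)
    fact, dims = max(partners.items(), key=lambda kv: len(kv[1]))
    return fact if len(dims) >= min_dimensions else None

# A classic star: "sales" joins to three dimension tables.
edges = [("sales", "customers"), ("sales", "products"), ("sales", "dates")]
print(detect_star_fact_table(edges))  # -> sales
```

In a real planner this detection would feed the reflection matcher, so that queries touching any subset of these tables can be rewritten against a single precomputed reflection.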
Dremio Learning Engine
New in this release, the Dremio Learning Engine makes recommendations and adapts to evolving workloads, helping users be more productive on the platform and allowing workloads to be deployed more efficiently, with fewer infrastructure and administrative resources. This release includes:
- Recommended Joins. Dremio will recommend complementary datasets to users as they work to curate data for analysis. Dremio learns how datasets can be joined based on observed behavior across all workloads, with support for all join types.
- Schema Learning. Dremio automatically observes data during query execution to detect schema changes in source systems, then adapts its Data Catalog automatically. This is essential for modern sources (e.g., Elasticsearch, MongoDB, JSON) where schema can vary from record to record, and for systems with evolving schemas. This enhancement makes data available to data consumers more quickly, with less intervention by administrators to manually synchronize schemas.
- Predictive Metadata Caching. For large deployments with millions of tables, partitions, collections, and indexes, Dremio will intelligently cache and index metadata into Dremio’s Data Catalog, taking into account the access patterns to those data sets. These changes make large deployments more efficient, and give data consumers an up-to-date view of source data for the most relevant data sets.
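The schema-learning behavior described above can be sketched with a toy example: observe records as they stream through query execution and widen the inferred schema whenever a new field or a conflicting type appears. The function name and the type-widening rule here are assumptions for illustration, not Dremio’s internals:

```python
# Hypothetical sketch of schema learning: infer a field -> type mapping
# from semi-structured records, widening on conflict. Not Dremio's
# actual logic -- just an illustration of schema inference from data.
def learn_schema(records):
    schema = {}
    for record in records:
        for field, value in record.items():
            observed = type(value).__name__
            current = schema.get(field)
            if current is None:
                schema[field] = observed      # first sighting of the field
            elif current != observed:
                schema[field] = "mixed"       # conflicting types: widen
    return schema

records = [
    {"id": 1, "name": "ada"},
    {"id": 2, "name": "lin", "age": 41},   # a new field appears
    {"id": "3-a", "name": "sam"},          # "id" changes type
]
print(learn_schema(records))
# -> {'id': 'mixed', 'name': 'str', 'age': 'int'}
```

This is the kind of per-record variability that sources like Elasticsearch, MongoDB, and raw JSON exhibit, and why a catalog that only reads declared schemas falls out of date.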
Dremio is designed to seamlessly integrate with existing data sources, as well as the favorite tools of data consumers, including BI and data science tools. With mission-critical deployments at many organizations, Dremio has continued to simplify operation of production environments, including the following capabilities:
- Automatic Failover. In the event of node and instance failures in a cluster, Dremio will automatically elect a new master coordinator node without requiring manual intervention. This allows companies to provide always-on availability to users while minimizing operational overhead.
- Workload Management. New administrative controls in this release allow for more granular workload management, with new options for system and user concurrency, query queue size, and memory threshold controls that allow administrators to meet varying performance needs of different users without unnecessary spend on cluster resources.
- Dynamic Granular Access Controls. Dremio integrates with centralized security controls like LDAP and Kerberos. New in this release, users can programmatically control access at both the row and column level to dynamically secure, mask, and transform data for end-user access, helping organizations meet the stringent demands of new privacy regulations such as GDPR.
- REST APIs. Now users can interact with Dremio through a comprehensive set of REST APIs, allowing DevOps teams to orchestrate Dremio with other components of their technology stacks, and end users to more easily build web applications directly on top of Dremio.
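To illustrate the shape of such orchestration, a client might submit a SQL statement as a JSON payload over HTTP. The endpoint path, auth scheme, and field names below are assumptions for illustration only; consult Dremio’s API reference for the real contract:

```python
# Hypothetical sketch of driving a Dremio-like REST API from a script.
# The endpoint path, header, and payload field names are assumptions --
# check the actual API documentation before relying on them.
import json
import urllib.request

def build_query_request(base_url, token, sql):
    """Construct (but do not send) an HTTP request submitting a SQL query."""
    payload = json.dumps({"sql": sql}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/sql",                # hypothetical endpoint
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",   # hypothetical auth scheme
            "Content-Type": "application/json",
        },
    )

req = build_query_request("http://localhost:9047", "demo-token",
                          "SELECT * FROM sales LIMIT 10")
print(req.get_method(), req.full_url)  # -> POST http://localhost:9047/api/sql
```

A DevOps pipeline would send such requests with any HTTP client, poll for job completion, and fetch results, which is what makes Dremio scriptable alongside the rest of a deployment stack.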
Support for Looker
This release also adds support for Looker. Users of Looker can now take advantage of Dremio’s powerful data acceleration capabilities for relational databases, NoSQL databases such as MongoDB and Elasticsearch, and data lakes built on Hadoop, Amazon S3, and Azure Data Lake Store.
“Dremio provides an easy-to-use solution for Looker customers wanting to analyze data directly from NoSQL data stores,” said Keenan Rice, vice president of alliances at Looker. “Dremio’s ability to query across multiple data stores without physically consolidating the data will save customers valuable time and resources.”