The Challenge of Multiple Data Frameworks in a Multicloud Environment

ICYDK: This year’s Big Data Paris 2018 Congress and Expo took place last month, and I was fortunate to represent SnapLogic there with a presentation. Happy to say that the winter weather didn’t deter me or the other 15,000-plus attendees from talking big data. Now in its 7th year, the conference focused on how big data is disrupting business processes more than ever, along with the trend toward the democratization of data and its business uses. As big data ROI becomes more measurable, it has become central to large organizations and their missions around fraud detection, customer satisfaction, anticipating outages or business opportunities, improving operational efficiency, and “name your concern.” Now more than ever, big data requires investment and thoughtful leadership across any large organization.

During my presentation, “Overcoming the challenge of multiple data frameworks in a multiple cloud environment,” I talked about how a major hurdle for today’s technology leaders is managing several data frameworks that reside in multiple clouds, each with varying standards due to multiple vendor solutions. To overcome this challenge and glean insights from these data lakes, enterprise leaders are turning to iPaaS and managed services for big data integration. https://goo.gl/zod1xi #DataIntegration #ML

Error Handling Using Try Scope in Mule 4

ICYDK: With the introduction of Mule 4, there are many new features available to use. One among them is the “Try Scope.” If you have a Java background, you will be familiar with this term: in Java, a try block encloses code that might throw an exception, so that the exception can be handled without breaking the whole program.
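For readers less familiar with the Java analogy, here is a minimal sketch of a try/catch block (the class name and parsed string are illustrative):

```java
public class TryExample {
    public static void main(String[] args) {
        try {
            // Code that might throw an exception
            int result = Integer.parseInt("not-a-number");
            System.out.println(result);
        } catch (NumberFormatException e) {
            // Handle the error without breaking the rest of the program
            System.out.println("Handled: " + e.getMessage());
        }
        // Execution continues normally after the try/catch
        System.out.println("Execution continues");
    }
}
```

Because the exception is caught and handled in place, the statements after the try/catch still run; this is exactly the behavior the Try scope brings to Mule flows.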

Similarly, this feature is now available in Mule 4. Instead of creating a new flow to define specific error handling for each component, we can place the component inside a Try scope. A Try scope wraps one or more event processors, then catches and handles any errors thrown by those enclosed processors. The behavior is as if you had extracted the enclosed processors into a separate flow with its own error handling strategy, but inline, without having to actually define a new flow. https://goo.gl/QfDLbQ #DataIntegration #ML
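As a rough illustration, a Try scope in a Mule 4 XML flow might look like the sketch below (the flow name, endpoint paths, and configuration references are hypothetical placeholders, not from the original post):

```xml
<flow name="orderFlow">
  <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
  <try>
    <!-- A processor that might fail, e.g. an outbound HTTP call -->
    <http:request method="GET" url="http://example.com/inventory"/>
    <error-handler>
      <!-- Handle connectivity errors inline; the flow then continues -->
      <on-error-continue type="HTTP:CONNECTIVITY">
        <set-payload value="fallback"/>
      </on-error-continue>
    </error-handler>
  </try>
  <logger level="INFO" message="#[payload]"/>
</flow>
```

Here the error handler lives inside the Try scope itself, so a failure in the wrapped request is handled inline and the logger after the scope still executes, rather than routing the error to a separate flow.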