Easier Data Marts with DreamFactory Data Mesh

Today’s IT teams are struggling to make sense of organizational data that has been compiled piecemeal and often stored within disparate storage solutions. Often this information needs to be aggregated and presented in a unified format, yet pulling data from multiple data sources and displaying it in a coherent way can be onerous and error-prone. The challenge is compounded when the data resides in different databases, and possibly within different clouds. To remedy this, companies often embark upon costly and time-consuming data lake, data mart, and data warehouse projects. In many cases, though, the IT team is simply looking for an effective way to combine data within a single unified interface!

In this tutorial I’ll introduce you to a powerful and very popular feature of the DreamFactory platform called Data Mesh. Using Data Mesh you can create virtual relationships between two databases much in the same way you can create foreign key relationships between two database tables. We’ll walk through an example in which a MySQL database running on Amazon RDS is meshed with an IBM DB2 database running on IBM Cloud, merging the data together so it can be retrieved via a single API endpoint.

Configuring the MySQL and IBM DB2 APIs

For the purposes of this tutorial I’ll assume you’re already familiar with DreamFactory fundamentals, including how to generate database-backed REST APIs. If not, I suggest taking a few minutes to watch our introductory video at https://academy.dreamfactory.com.

Configuring Data Mesh

After generating your APIs, enter DreamFactory’s Schema component and select the API and table that will serve as the relationship parent. In the following screenshot I’ve chosen the MySQL API and the employees table:
Schema
Once selected, you can scroll down to the table’s “Relationships” section. This section warrants a bit of explanation. When DreamFactory generates a database API, it analyzes all tables, stored procedures, views, table columns and datatypes, and foreign key relationships. This section contains a list of join aliases that you can use to easily join tables via the API:
Relationships
However, you’re not limited to these aliases; by clicking the Add Virtual Relationship button you can create new relationships where they didn’t previously exist, including relationships between two databases. Click on the Add Virtual Relationship button and you’ll be presented with an interface for defining the relationship between two databases. See the following screenshot:
Defining the Data Mesh
In this screenshot, I’ve defined the fields as follows:

  • Always Fetch: This field enables the virtual relationship. You can also optionally enable the relationship on demand via the API.

  • Type: This field determines the relationship type. You can choose from Belongs To, Has One, Has Many, and Many to Many.

  • Reference Service: This field identifies the related service. It’s set to DB2 because the relationship pertains to the previously configured IBM DB2 API.

  • Reference Table: This field identifies the related table. Recall we selected the MySQL service’s employees table, so we’re going to relate the employees table to the DB2 service’s employee_status table.

  • Reference Field: This field identifies the foreign key field found in the related employee_status table.

After defining these fields, save the changes and you’re ready to begin using the new relationship!

Querying the Relationship

Now that the relationship has been defined, let’s execute a query and view the combined results. We’ll begin by showing what a query to the employees table looks like prior to configuring Data Mesh:
MySQL results
After configuring Data Mesh, the results of the same query look like this:
MySQL and DB2 combined results
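
If you prefer to see the request itself, here is a minimal sketch of what such a query might look like from Python using DreamFactory’s related parameter. The instance URL, API key, and relationship alias below are placeholders; substitute the values and alias generated for your own services.

    import requests

    # Placeholder instance URL, API key, and relationship alias -- substitute your own.
    BASE_URL = "https://df.example.com/api/v2"
    HEADERS = {"X-DreamFactory-API-Key": "YOUR_API_KEY"}

    # Plain query against the MySQL-backed employees table
    employees = requests.get(
        f"{BASE_URL}/mysql/_table/employees",
        headers=HEADERS,
        params={"limit": 5},
    ).json()

    # The same query, asking DreamFactory to resolve the virtual relationship
    # to the DB2 employee_status table in a single response.
    merged = requests.get(
        f"{BASE_URL}/mysql/_table/employees",
        headers=HEADERS,
        params={"limit": 5, "related": "employee_status_by_emp_no"},
    ).json()

    print(merged["resource"][0])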

Conclusion

DreamFactory’s Data Mesh feature offers an incredibly straightforward, point-and-click solution for creating sophisticated and transparent unified queries. And you’re certainly not limited to meshing just two databases; try meshing three or more together and marvel at the time and aggravation you’ll save!

API Integration – What is it? The Definitive Guide

Modern business runs on software. This involves storing business data and moving that data from place to place. In the old days, software stored data in a siloed fashion. It was only available in one place, cut off from the rest of the world. Companies struggled to pull data from different locations, and combining it in meaningful ways was difficult. This isolated data only provided a fraction of its potential value. In modern applications, the data wants to be free. With the onset of the web, data is available everywhere, and sharing it has had an exponential effect on the power of software. Applications share data using some form of API, or Application Programming Interface. With the explosion of available APIs, though, managing them all has become more difficult. API Integration allows you to combine, track, and add new APIs.

History

In the beginning, there was the database. It stored vast quantities of information and allowed retrieval and searching at scales never before seen. What was once kept in ledgers and paper was now electronic. This pleased the Business immensely: never before were so many questions answerable! Yet there were clouds on the horizon. As the software industry matured, people invented new types of applications, and with them, new databases. Different and separate databases. Over time, the need to combine the data became clear, but chasms existed everywhere. All these applications stored their own data, in their own locations. Business had unintentionally imprisoned the data!

To free the data, systems needed to communicate with one another. In the late 1970s, Remote Procedure Call (RPC) systems began to arise. RPC provided a common interface and data format, which allowed computers to communicate with each other. By today’s standards, this architecture required a great deal of expertise to create. It involved complicated coding, was brittle, and was expensive. But it worked. The imprisoned data could begin to stretch its legs.

Over time, RPC implementations became more robust and standardized. From the late 1980s to the early 1990s, standards such as CORBA and Microsoft’s DCOM gained prominence. Standardization made it easier to write code that could communicate: different teams could work independently and then combine systems, while better transport mechanisms and message reliability made everything sturdier. Still, by today’s standards, these technologies were difficult and expensive. A famous paper highlighted the pain points developers were feeling. The industry slogged on, continuing to search for a solution.

In the 1990s, the World Wide Web (WWW) began gaining traction outside of academia, and by the late ’90s developers had begun using Hypertext Transfer Protocol (HTTP) to communicate. This widespread standard solved the problem of “how do we send messages from point A to point B”. Popular operating systems such as Windows and Linux supported it natively, and because HTTP uses network port 80, all that web browsing meant the port was already open by default on firewalls. However, disagreements about what form the data should take still rumbled.

At first, the software industry embraced two frameworks: Web Services Description Language (WSDL) defined the format of the data and services, while Simple Object Access Protocol (SOAP) described how to send it. The combination of these two specifications laid out the “rules” of communication over HTTP. While a good idea in theory, they quickly became overwhelming: difficult to maintain and painful to implement. Too much of a good thing, you might say.

Finally, in 2000 Roy Fielding published a groundbreaking paper describing the Representational State Transfer (REST) architecture. This approach provides a simple mechanism of exposing:
  1. Resources that represent something in your system (orders, customers, accounts, etc.)
  2. Actions against them, represented using existing HTTP verbs (GET, POST, PUT, DELETE).
This simple representation of business entities and operations works very well. With it, REST eliminates much of the complexity and overhead of earlier approaches. No more struggling with definitions, schemas, and standards: a consumer takes actions against resources. Very straightforward.

With these recent advancements, API creation is finally within reach of almost anyone. In the years since, we have seen an explosion in the number of available APIs, serving almost every functional area you can imagine. According to ProgrammableWeb, there are over 20,000 publicly available APIs. This rapid growth has created an ecosystem of API directories and management solutions.
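
To make the contrast with earlier approaches concrete, here is a minimal sketch of consuming a REST resource from Python. The endpoint and payload are hypothetical; any REST-style API follows the same pattern of resources plus HTTP verbs.

    import requests

    # Hypothetical API root; any REST-style service follows the same pattern.
    BASE_URL = "https://api.example.com"

    # GET: retrieve a collection of "order" resources
    orders = requests.get(f"{BASE_URL}/orders", params={"status": "open"}).json()

    # POST: create a new order resource
    new_order = requests.post(
        f"{BASE_URL}/orders",
        json={"customer_id": 42, "items": [{"sku": "ABC-123", "qty": 2}]},
    ).json()

    # PUT: update the resource we just created
    requests.put(f"{BASE_URL}/orders/{new_order['id']}", json={"status": "shipped"})

    # DELETE: remove it
    requests.delete(f"{BASE_URL}/orders/{new_order['id']}")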

Integration

With APIs so prevalent in the industry, API integration has become super important. There is an ocean of data, and thousands of places to pull it from. An effective application leverages different APIs to maximize their power. This can include:
  • Internal data belonging to the business, both current and historical:
    • Data that “lives” in modern systems that already feature APIs.
    • Data buried and hidden in legacy databases, whose important historical trends and details become available again.
  • External data sources, such as:
    • Real-time data about financial markets.
    • Information about the weather in various geo-locations.
    • Traffic data from cities, highways, and rail lines.
    • Births and deaths. Marriages and divorces.
Modern applications can search, filter, and combine this data, accessing multiple data sources at a time. The application’s utility and power grows. No longer is a firm required to build out massive data sets themselves: once someone collects and publishes the data, the cost is never incurred again. For example, look at the Google Maps API. The cost of mapping every road, location, and border in the world is astronomical. But now that it’s done, we don’t need to do it again.

With API integration, knowing what already exists can accelerate solving a business problem. Oftentimes there is no need to re-invent the wheel – simply pull the data from an existing API. API directories can assist with this discovery. Like a phone book, they allow browsing for the right service. They provide thorough documentation and tutorials, client libraries allow rapid consumption of APIs, and discussion boards offer a community of help to aid in getting started. With these tools available, developers stand on the shoulders of giants.

On the flip side, a business can choose to expose its own valuable information in a public API. By hosting an API, a business can provide its data to others. This can drive traffic to its site and help build its reputation. If the information is valuable enough, the company can charge for access to the API. Suddenly, what was once a cost can become a revenue stream! Plugging the API into an API directory advertises the business to other developers. These actions can have a multiplier effect on the value of the data, and consumers can feed back into the system, improving it further.
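
As a small illustration of what combining external sources can look like in code, here is a sketch that pulls from two hypothetical APIs (the endpoints and field names are made up for illustration) and merges the results into a single view:

    import requests

    # Both endpoints are hypothetical stand-ins for real third-party APIs.
    weather = requests.get(
        "https://api.example-weather.com/v1/current",
        params={"lat": 35.66, "lon": 139.73},
    ).json()

    traffic = requests.get(
        "https://api.example-traffic.com/v1/congestion",
        params={"city": "Tokyo"},
    ).json()

    # Combine the two sources into a single, application-specific view.
    dashboard = {
        "temperature_c": weather.get("temperature_c"),
        "conditions": weather.get("conditions"),
        "congestion_level": traffic.get("level"),
    }
    print(dashboard)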

DreamFactory

If you’re sold on the idea of using APIs to power your business, what’s next? If your business sits upon a large number of separate data stores, the chore can seem daunting. Developing an API layer for each system could cost thousands of developer hours: time, effort, and money better spent elsewhere. Wouldn’t it be nice to generate these API layers? Some sort of code generation mechanism that made all that data available in a modern REST API?

Friends, that day is upon us! The good folks at DreamFactory have built a system that does exactly that. Start with some simple configuration; then, with a few clicks of a mouse, DreamFactory generates powerful APIs. Feed it your database connections and legacy SOAP APIs, and out comes a robust REST API, complete with sparkly documentation. It looks like a team of developers took months to create it, and it’s available to you in minutes.

Watch this video, and prepare to breathe a sigh of relief. Slash your budget estimates. DreamFactory has lifted a large cost of software development off your plate!

MS SQL Server vs MySQL – Which Reigns Supreme?

Relational databases (RDBMS) form the backbone of most business software solutions. When people discuss SQL (Structured Query Language), it’s usually in reference to an RDBMS. Applications store all their important data there, and the databases (usually) power all the searches. A good database can bring a system to a higher level. A bad database can bring a business to its knees. For any developer or enterprise embarking on a new software venture, one big question is "which database vendor should I use?". In the early days of computing, database vendors such as IBM and Oracle reigned supreme. That has changed in recent years: MySQL (an open source solution recently purchased by Oracle) and Microsoft’s SQL Server have gained market share. According to a 2018 StackOverflow survey, they hold the top two rankings in SQL database usage. But which one is best for YOUR business? MySQL vs SQL Server presents a tough and complicated decision!
MySQL vs SQL Server

Which to Choose?

To select the best database solution for the task at hand, one must weigh several factors, including:
  • Operating System
  • Cost
  • Cloud Support
  • Performance
  • Tool Support
This article will compare and contrast these decision points for MySQL and SQL Server. Armed with these details, hopefully you will be better positioned to make this important decision.

Operating System

Most companies have already invested time, money, and expertise in their computing infrastructure. This includes their choice of Operating System (OS). Usually that boils down to "Windows vs Linux" (although cloud computing is beginning to change that). When selecting a database to power your business, the OS your company is already running is a big deciding factor. Here’s how that looks for MySQL vs SQL Server:

MySQL

MySQL runs on virtually all major operating systems, including Linux, MacOS, and Windows. While traditionally associated with Linux (as part of the famed LAMP stack), it will run on Windows as well.

SQL Server

SQL Server was originally written for the Microsoft Windows operating system. In recent years, Microsoft has made strides in embracing the open source community, and providing support for both Linux and Mac OS. The most recent versions of SQL Server run on Linux natively, and will run on Mac OS within a Docker container.

Advantage – It Depends

Honestly, this one depends on what OS your company is already using. While both platforms support the two major operating systems, there are "home court advantages" to each. If you’re already a Windows and .Net shop, it probably makes sense to use SQL Server. If you’re a Linux and Python/Java/PHP shop, MySQL might be the better choice.

Cost

Cost is always a factor when making decisions about software, and an enterprise-grade database can be one of the biggest expenses. Both solutions offer a "free" tier. From there, the price depends on how powerful a database you need, and what sort of support you’re looking for. It may be tempting to try and save money, and go for the free tier. But if the database is mission critical, paying for advanced monitoring, backup, and support is probably worth the cost. Here’s the breakdown:

MySQL

MySQL’s free offering is the MySQL Community Edition. It boasts a decent number of the standard features. This would work fine for a developer learning the platform. It should also meet a smaller system’s needs. For a more complete feature set (as well as Oracle support), you need to shell out some bucks. According to recent pricing, this can run you anywhere from $2k-$10k per server, annually. There are 3 different tiers (Standard Edition, Enterprise Edition, and Cluster CGE). Choosing between them largely depends on the complexity and scale of your data needs.

SQL Server

SQL Server’s free offering comes in two flavors – here’s how Microsoft describes them:
  • Developer – "Full-featured version of SQL Server software that allows developers to cost-effectively build, test, and demonstrate applications based on SQL Server software."
  • Express – "Free entry-level database that’s ideal for learning, as well as building desktop and small server data-driven applications of up to 10 GB."
In a nutshell, the Developer edition gives you everything you need, as long as you’re not using it in production. Express has a smaller feature set, but its license allows for production use. Like MySQL, if your business needs and scale are smaller, Express may do the trick. If you need a more robust feature set, you’re going to have to pay for it. According to Microsoft’s pricing page, you can pay anywhere from $931 to $14,256 per core. There is a wide discrepancy in pricing here, and your business needs will dictate how much power you need.

Advantage – It Depends

Once again, the best choice here depends on the needs of your business. Both solutions offer a free tier. Both have complicated pricing schemes beyond that. Consult with the sales department of each to get a final determination of what you need, and what you would end up paying.

Cloud Support

In recent years the computing landscape has undergone a dramatic transformation. Cloud computing is all the rage. The "Big 3" providers are currently Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Each offers robust services, such as storage, computing, and yes, SQL databases. This revolution has impacted the first two bullet points of this article (OS and Cost): the cloud provider manages the OS and server complications, and offers "pay as you go" plans to avoid the major up-front costs. In a way, this shift has diminished the importance of OS and cost. Instead, other considerations such as performance, tool support, and feature set are bigger factors. Here’s how the offerings stack up:

MySQL

All 3 of the "Big 3" cloud providers offer a managed MySQL service. Each claims easy administration, high scalability, robust security, and pay-as-you-go pricing. This article offers an in-depth comparison of MySQL offerings across cloud providers. It does not attempt to compare pricing due to differences between the providers; it seems cloud pricing also falls into the "it depends" category, and there is no "one size fits all" answer. The best approach might be to first create MySQL environments in several clouds, then load test typical usage for your business operations and determine how the different costs shake out.

It is worth noting that Oracle (owner of MySQL) ALSO features a cloud offering for MySQL. This might be worth exploring due to Oracle’s "native" support of MySQL. However, a SQL database is only one piece of a software architecture landscape: a system still needs storage, computing, and security services, and Oracle is not currently a market leader in providing them. For that reason, Oracle’s cloud may be a risky choice for hosting MySQL.

It is ALSO worth noting that all the cloud providers offer Virtual Machine services, upon which you can run your own MySQL instances. This is an option for customers that want more control over their databases, though it requires more expertise (and is more expensive).

SQL Server

Similar to MySQL, each major cloud provider has a SQL Server offering:
  • AWS offers SQL Server on their Relational Database Service.
  • Azure offers SQL Server on their SQL Database service. While SQL Server runs under the covers, the SQL Database offering abstracts much of the server administration away from the end user.
  • Google offers SQL Server on their Google Cloud Platform offering.
An interesting twist here is that one of the major cloud providers (Microsoft) is also the creator of SQL Server. While all 3 providers offer strong choices, there’s a sense of a "home-court advantage" with Microsoft. Like MySQL, you could also pay to host Windows VMs in the cloud, and self-host SQL Server. This also comes with the same expertise requirements and additional cost concerns.

Advantage – SQL Server (SQL Database)

While either solution works as a cloud offering, the combination of Microsoft Azure and SQL Database is hard to beat. If you are ALREADY using another provider, or have ALREADY invested in MySQL, then that would still probably be your choice. However, coming into a green-field decision, the Azure/SQL Database choice is pretty compelling.

Performance

Database performance is crucial to any software application. If the database doesn’t respond in an expedient fashion, the entire system bogs down, leading to poor user experience, delays in operations, and lost money. Database performance depends on an IMMENSE number of variables: slight differences in workloads can skew advantages one way or another, and minor tweaks can improve results. A well-designed database is worth its weight in gold. MySQL and SQL Server both tout extensive performance and scaling capabilities, but after scouring the web for comparisons between the two, SQL Server seems to have the advantage in most published benchmarks. An additional consideration is that MySQL is Oracle’s "entry level" database: for high performance needs, Oracle would steer you towards their flagship database offering. On the other hand, SQL Server IS Microsoft’s flagship offering.

Advantage – SQL Server

While not a slam dunk, SQL Server’s slightly better numbers, and "flagship" status give it a slight advantage here.

Tool Support

In order to work with a database, one needs a good toolset. The database itself is a background process without a GUI. However, in order to develop and support the database, you need to interact with it. Both MySQL and SQL Server provide front end clients for this purpose.

MySQL

MySQL’s client application is MySQL Workbench. Workbench has offerings that run on Windows, Linux, and MacOS. It offers several important database management tools, including:
  • Database connection and management
  • SQL editor and execution
  • Database and Schema modeling GUI
  • Performance monitoring and query statistics

SQL Server

SQL Server’s client application is SQL Server Management Studio (SSMS). While SQL Server runs on Windows, Linux and MacOS (via Docker), SSMS is ONLY available on Windows machines. Note that Microsoft provides a Visual Studio Code extension to execute SQL from a Linux-based machine. SSMS has a more robust feature set than MySQL Workbench. This includes:
  • More extensive Database management tools. Includes a robust set of security, reporting, analysis, and mail services.
  • A powerful execution plan visualizer. This allows easy and fast identification of performance bottlenecks.
  • Integrated Source control.
  • Real-time activity monitor with filtering and automatic refresh.

Advantage – SQL Server

Both offerings provide "the basics" (ability to execute SQL and view/manage databases), but the SSMS experience is far superior. Seasoned Database Administrators (DBAs) may wish to manage their databases with scripts and SQL. But many users want a simple GUI to perform these tasks. This is an area where SSMS shines. Also, the execution plan visualizer makes performance bottlenecks easy to fix. That can pay for itself time and time again.

Language Support

Both platforms utilize SQL to interact with their schema and data (with some minor differences). However, they differ when it comes to runtime languages interfacing WITH the database. For example, in a typical server architecture, you might have:
  • Database – SQL reads/writes data
  • App Server – C++/PHP/Perl/Python/.Net/Java provide business logic, and interface with database
Here’s some of the differences to consider between the two systems:
  • SQL Server supports T-SQL, a proprietary extension to SQL. This enables concepts such as Procedural Programming, local variables, string/data processing functions, and FROM clauses in UPDATE/DELETE statements. Basically, you can do more with your SQL.
  • Runtime languages – both systems support connecting using the major programming languages (C#, Java, PHP, C++, Python, Ruby, Visual Basic, Delphi, Go, R). There are some articles on the web claiming that less-popular languages such as Eiffel are only supported on MySQL, but as long as you can connect using ODBC, both databases are available.
  • If using a .Net language (C#, F#, Visual Basic, etc), once again Microsoft provides a "homecourt advantage". Microsoft wrote the ADO.Net library for SQL Server first. ADO.Net works with MySQL, but it really shines with SQL Server.
  • SQL Server also provides the additional (and possibly controversial) mechanism of invoking .Net code FROM a stored procedure. This can be a powerful mechanism for injecting all sorts of functionality within your database. It also allows you to shoot yourself in the foot. Proceed with caution here.
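
To illustrate the point about language support, here is a minimal sketch of issuing the same query against each database from Python. It assumes the mysql-connector-python and pyodbc driver packages are installed, and all connection details are placeholders:

    import mysql.connector  # pip install mysql-connector-python
    import pyodbc           # pip install pyodbc (plus an ODBC driver for SQL Server)

    # MySQL: placeholder host, credentials, and schema for illustration only.
    mysql_conn = mysql.connector.connect(
        host="mysql.example.com", user="app_user", password="secret", database="sales"
    )
    cur = mysql_conn.cursor()
    cur.execute("SELECT COUNT(*) FROM orders")
    print("MySQL orders:", cur.fetchone()[0])
    cur.close()
    mysql_conn.close()

    # SQL Server: the same query over ODBC, again with placeholder values.
    mssql_conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mssql.example.com;DATABASE=sales;UID=app_user;PWD=secret"
    )
    cursor = mssql_conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM orders")
    print("SQL Server orders:", cursor.fetchone()[0])
    mssql_conn.close()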

So Which To Choose?

Obviously there is a great deal of information to unpack here. The "it depends" caveat still looms large over the entire decision process. A general rule of thumb for approaching this decision might be:
  • If you are a Linux shop, already using pieces of the LAMP stack, MySQL would fit in with that nicely.
  • If you are a Microsoft shop, already invested in .Net and the Windows ecosystem, SQL Server seems like the obvious choice.
  • If you are completely green field, or looking to make a clean start, the evidence above leans towards SQL Server. Microsoft is building momentum in the cloud arena with Azure’s SQL Database. They are continuing to embrace other ecosystems (e.g., Linux) and open source. And SQL Server features a better toolset, the more robust T-SQL, and arguably better performance.

DreamFactory

With your database decision made, what’s next? Most business applications consist of a database on the back end, an API or application layer in the middle, and a GUI up front. Wouldn’t it be nice to knock out the work in that middle layer automatically? Some sort of code generation mechanism that made all the database information instantly available for a GUI to consume?

Friends, that day is upon us! The good folks at DreamFactory have built a system that does just that. With some configuration, and a few clicks of a mouse, DreamFactory will turn your database objects into a REST API. They support all sorts of databases (MySQL, SQL Server, and a long list of others), and they even auto-generate the documentation. Watch this video, and prepare to breathe a sigh of relief. DreamFactory just removed a big piece of heavy lifting from your plate!

Creating a Geocoder Service Using DreamFactory and the Google Maps Geocoding API

There is no question DreamFactory’s native connectors have saved IT teams countless hours of development time. Yet these are almost incidental when one takes into consideration the platform’s ability to integrate with thousands of third-party REST and SOAP services, not to mention create entirely new APIs through the scripted service interface. Further, thanks to DreamFactory’s ability to leverage third-party libraries, new APIs can often be created in just a few dozen lines of code.

In this tutorial you’ll learn how to create a DreamFactory-managed geocoding service using Spatie’s popular open source geocoder package. Using this service, you’ll be able to easily convert addresses into latitude and longitude coordinates, as well as perform reverse geocoding (converting coordinates back into an address). Once complete, you’ll be able to use a JavaScript library such as Axios or a PHP library like Guzzle to add geocoding to your application with no additional coding required! Continue reading “Creating a Geocoder Service Using DreamFactory and the Google Maps Geocoding API”
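
Once such a service is in place, calling it is just another REST request. Here is a rough sketch from Python; the instance URL, API key, service name, and parameter names below are all placeholders rather than the tutorial’s actual values:

    import requests

    # Placeholder DreamFactory instance, API key, and geocoder service name;
    # the query parameter names are assumptions for illustration.
    GEOCODER_URL = "https://df.example.com/api/v2/geocoder"
    HEADERS = {"X-DreamFactory-API-Key": "YOUR_API_KEY"}

    # Forward geocoding: address -> coordinates
    coords = requests.get(
        GEOCODER_URL,
        headers=HEADERS,
        params={"address": "1600 Amphitheatre Parkway, Mountain View, CA"},
    ).json()
    print(coords)

    # Reverse geocoding: coordinates -> address
    address = requests.get(
        GEOCODER_URL,
        headers=HEADERS,
        params={"lat": 37.422, "lng": -122.084},
    ).json()
    print(address)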

DreamFactory and GitHub – Event Scripting Bliss

DreamFactory’s Event and Platform Scripting

The DreamFactory scripting objects enable you to create a custom delivery of data through a unified URL structure. Ok, this is a pretty well-known fact if you have spent some time digging around DreamFactory. DreamFactory allows you to link your event scripts and custom scripts to a file that is managed in your GitHub account. This eliminates the need to manually update your DreamFactory scripts when you update those scripts in your source control repo.

Continue reading “DreamFactory and GitHub – Event Scripting Bliss”

Extend Your Twitter Integration With Third-Party Libraries

DreamFactory and Twitter
DreamFactory and Twitter

Integrating Third-Party Libraries With DreamFactory Scripting

Being able to script business logic into your endpoints to further extend and customize the data responses and requests is one of the best features inside of DreamFactory. As a form of ETL data integration, DreamFactory can help you create a data lake or data warehouse, especially when combined with a service such as Xplenty. Sometimes cases arise where you need to take advantage of the robust third-party library community to help extend your API. In the example below, we are going to use a third-party library to connect our DreamFactory instance to a Twitter account to post updates. We also bring in the concept of connecting the Twitter service to our Raspberry Pi IoT device to post temperature information, showing how you can connect multiple services through DreamFactory to manage your data needs in a simple, robust method that makes API management a snap.

Continue reading “Extend Your Twitter Integration With Third-Party Libraries”

How I was port-out scammed in 3 hours

Tokyo, midday on a Friday. My phone kept buzzing during my customer meeting. It was a workday, a day of travel, and the day a personal property purchase was due to settle. Little did I know it was also the day I was the target of a “port-out” scam.

In the DreamFactory Tokyo office in Azabu Juban, Minato, in the few minutes I had between one meeting and the next, I glanced at my phone, trying to keep pace with solicitor updates regarding my property transaction finalizing. Amidst the deluge of notifications, one thing I quickly dismissed was a pair of verification SMS codes from my telco provider that I hadn’t asked for and didn’t give a second thought to:

Continue reading “How I was port-out scammed in 3 hours”

DreamFactory News – Training, a New Guide, SaaS, and More

DreamFactory Jeanie
Our New Mascot. Say Hello to Jeanie. Stay Tuned For More

DreamFactory Happenings

It’s hard to believe the year’s end is almost upon us! This has been a pretty transformative year for the company. We’ve seen record demand for the DreamFactory Platform, and have additionally been working around the clock on a number of new initiatives:

Continue reading “DreamFactory News – Training, a New Guide, SaaS, and More”

Improved Data Security with MySQL Privileges and DreamFactory

DreamFactory and MySQL

All MySQL installations naturally include a root account and offer the ability to create restricted user accounts. However, otherwise sane developers will often use these root accounts for application-level communication, dramatically raising the likelihood of data theft, data exfiltration, and other security issues. For that reason the DreamFactory team always recommends users take care to create restricted MySQL users before using the platform to generate APIs.

In this tutorial, you’ll learn how to create a non-root MySQL user and then further restrict this user’s privileges to a specific database and even table subset. You’ll also learn how to subsequently revoke a user’s privileges to reflect changing requirements.
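
As a preview, here is a minimal sketch of the kinds of statements involved, issued through Python’s MySQL driver. The database, table, account names, and passwords are placeholders, and it assumes the mysql-connector-python package is installed:

    import mysql.connector  # pip install mysql-connector-python

    # Connect as an administrative account; all names and credentials below are placeholders.
    admin = mysql.connector.connect(
        host="localhost", user="root", password="root_password"
    )
    cur = admin.cursor()

    # 1. Create a restricted, non-root account for the application.
    cur.execute("CREATE USER 'api_user'@'%' IDENTIFIED BY 'a-strong-password'")

    # 2. Grant only the privileges the API actually needs, scoped to one table.
    cur.execute("GRANT SELECT, INSERT, UPDATE ON corporate.employees TO 'api_user'@'%'")

    # 3. Later, revoke privileges as requirements change.
    cur.execute("REVOKE INSERT, UPDATE ON corporate.employees FROM 'api_user'@'%'")

    cur.close()
    admin.close()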

Continue reading “Improved Data Security with MySQL Privileges and DreamFactory”

Learning About The Bitnami System Database

Database Code Lines

The Elusive Bitnami System Database

If you want to spin up a fast API solution, DreamFactory is a great way to do that with a Bitnami install. Within minutes you can have a fully documented and secure REST API to utilize. Just like any program bundle, there are lots of features to learn and interact with.  Outside of a Docker Swarm or AWS ELB setup, it is pretty hard to find a way to spin up a DreamFactory instance faster. We are going to dive in a bit further to find out how to interact with the system database. Continue reading “Learning About The Bitnami System Database”