
Dialects for several database vendors
In a nutshell, we accept open-source contributions to the database dialects so that several vendors can be supported. The project is open-source on BitBucket. Contributors can extend the support, which currently covers only PostgreSQL, so we may see support for MySQL, Oracle and SQL Server. Contributors can even publish their dialect directly on the Atlassian Marketplace (as a free or paid add-on).
Contributions are accepted
The library is open-sourced on BitBucket: https://bitbucket.org/aragot/play-sql-dialects
The library is automatically bundled with every version of Play SQL, so pull requests will be integrated in the next release. However, you don't need to wait for us: you can compile your own dialects and install them in Confluence, and all the dialects in your plugin will replace the default ones.
The license is the Apache License 2.0. It is possible to derive your own plugin from it and publish it on the Atlassian Marketplace under your own name, provided you respect the conditions of the license. If you provide and maintain valuable support for another database vendor, you can even ask customers to pay for it.
Obviously this is not an announcement that the full product “Play SQL Spreadsheets” is open-sourced. It remains a commercial product, but we’ve open-sourced the core library.
What it means for MySQL and Oracle users
Depending on your contributions, MySQL and Oracle users may get much better support for custom types, autocomplete and monitoring. All it takes is someone to write the queries which return the list of columns and their types.
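To give an idea of what that involves (this is not the plugin's API, just the kind of query a dialect needs), here is a JDBC sketch which reads the list of columns and their types from MySQL's information_schema; the connection URL, credentials and table name are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Illustrative only: list the columns of a table and their types on MySQL.
    public class ListColumns {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "password");
                 PreparedStatement stmt = con.prepareStatement(
                     "SELECT column_name, data_type"
                     + " FROM information_schema.columns"
                     + " WHERE table_schema = ? AND table_name = ?"
                     + " ORDER BY ordinal_position")) {
                stmt.setString(1, "mydb");
                stmt.setString(2, "mytable");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("column_name")
                                + ": " + rs.getString("data_type"));
                    }
                }
            }
        }
    }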
Beware that only Postgres supports transactional DDL: only Postgres is able to roll back a transaction which contains structural changes to tables, such as adding a column or changing its type. This is unfortunate, because the full spreadsheet features rely heavily on transactions. Therefore I doubt any contributor will engage in supporting read-write access for vendors other than Postgres; contributions will most likely focus on accessing the schema's metadata.
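To make the difference concrete, here is a minimal JDBC sketch (connection details are placeholders). On Postgres the rollback undoes the ALTER TABLE; on MySQL or Oracle the DDL statement is committed implicitly, so the rollback has no effect on it:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Illustrative only: transactional DDL on PostgreSQL.
    public class TransactionalDdl {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
                con.setAutoCommit(false);
                try (Statement stmt = con.createStatement()) {
                    stmt.execute("ALTER TABLE mytable ADD COLUMN extra TEXT");
                    // Something fails later in the same transaction...
                    con.rollback();
                    // On PostgreSQL the new column is gone again; on MySQL or
                    // Oracle it would already have been committed.
                }
            }
        }
    }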
Using Play SQL Dialects for other purposes
I needed dialects for Play SQL. I've made them open-source so you can use them for your own commercial product. I created Play SQL Dialects because I couldn't find an open-source library which would allow manipulating the database structure in Java:
- Hibernate maps tables to Java beans, but it doesn't publish an API to add/remove columns.
- Liquibase lets you describe a schema in XML and upgrades the database accordingly, but it still doesn't offer a Java API to create/drop columns.
- This StackOverflow thread lists the Java libraries for SQL DDL manipulation; please add your own library if you know one. DDL stands for “Data Definition Language” and is made of commands like “CREATE TABLE”, as opposed to DML, the “Data Manipulation Language” of commands like “SELECT * FROM …”.
It's open-source, so you are free to reuse it for your own projects. Few people know it, but it's even possible to charge for delivering open-source assets. If you do succeed in reusing it, it means I've made it generic enough!
How to work with the dialect plugin?
This information is available on the Play SQL Documentation – Dialects.
Here are the steps to compile and install a dialect:
- Clone the BitBucket repository: git clone https://bitbucket.org/aragot/play-sql-dialects.git,
- Install the Atlassian SDK,
- Start Confluence (execute the command “mvn amps:debug” in the playsql-dialects-plugin directory)
- Install the plugin (execute the command “mvn amps:cli” in the same directory, then type “pi” each time you want to reinstall it).
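In short, assuming the plugin module sits in the playsql-dialects-plugin directory of the clone, the whole cycle looks like this:

    git clone https://bitbucket.org/aragot/play-sql-dialects.git
    cd play-sql-dialects/playsql-dialects-plugin
    mvn amps:debug     # starts Confluence with the plugin
    mvn amps:cli       # in a second terminal; type "pi" to reinstall the plugin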
Throughout the implementation, you may regularly check whether your developments work in Confluence.
How to implement a new dialect?
Two files are necessary to create a dialect:
- dialect-postgres.xml: An XML file containing all the queries. It inherits from dialect-generic.xml, so you only need to specify queries which are different from the default ones.
- PostgresDDLDialect.java: A Java file which implements the methods of the API.
Let’s look at the annotations on the Java file:
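The snippet below is a sketch rather than a verbatim copy: the annotation and attribute names are assumptions, so check PostgresDDLDialect.java in the repository for the real ones.

    // Sketch only: annotation and attribute names are assumptions.
    @Dialect(
        queries = "dialect-postgres.xml",           // the XML file with the queries
        capabilities = { "readWrite", "autocomplete", "monitoring" },
        driverClassName = "org.postgresql.Driver",  // driver data, used to help the user
        minimumUrlString = "jdbc:postgresql:"
    )
    public class PostgresDDLDialect extends GenericDDLDialect {
        // Only the queries and methods which differ from GenericDDLDialect
        // are overridden here.
    }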
We discover:
- There's a reference to the XML file which contains the queries. The class inherits from GenericDDLDialect, which is annotated with its own XML file, so only the overriding queries and methods must be implemented.
- There is a list of capabilities: “readWrite”, “autocomplete”, “monitoring”. Those capabilities are visible when installing the dialect, and they let Play SQL hide some UI features when a capability isn't present.
- It provides a few pieces of data about the driver (its name, the minimum URL, etc.) which are only used to help the user. For example, the “minimumUrlString” is used when the user switches between two vendors: if the URL doesn't at least contain that string, it is replaced by a suitable example for Postgres.
The class also implements DDLDialect. Basically, the API definition is split into several interfaces which each have a theme:
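The interface names below are assumptions which mirror the capabilities; the real definitions are in the repository:

    // Hypothetical names, mirroring the capabilities.
    interface ReadWriteDialect { /* create/drop tables and columns, write rows */ }
    interface AutocompleteDialect { /* list schemas, tables, columns and their types */ }
    interface MonitoringDialect { /* report on the activity of the database */ }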
If you intend to support all capabilities, you will implement all methods of all interfaces. However, GenericDDLDialect provides many suitable default implementations, so you may be able to simply inherit from this class and write the right queries in dialect-postgres.xml.
Data types
Databases support different data types. One part of the dialects is about mapping vendor-specific types to the dozen types of Play SQL. The other vendor-specific parts of the dialects typically:
- Manage tiny differences in SQL. For example, MySQL quotes column names with a backtick (`), whereas the SQL standard, followed by Oracle and Postgres, uses the double quote (") (see the sketch below).
- Import the driver. It can be found in the /lib directory of Tomcat (for JNDI datasources), in the Confluence sources (for JDBC datasources), or embedded in the plugin (if you choose to provide your own driver).
It all depends on the caprices of your database's SQL syntax.
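For instance, a dialect could expose identifier quoting as a single overridable spot. A minimal sketch, not the plugin's actual API:

    // Illustrative sketch: the same identifier quoted for the SQL standard
    // and for MySQL.
    public class IdentifierQuoting {
        /** Standard SQL (Postgres, Oracle): double quotes. */
        static String quoteStandard(String name) {
            return '"' + name.replace("\"", "\"\"") + '"';
        }

        /** MySQL in its default mode: backticks. */
        static String quoteMySql(String name) {
            return '`' + name.replace("`", "``") + '`';
        }

        public static void main(String[] args) {
            System.out.println(quoteStandard("order"));  // prints "order"
            System.out.println(quoteMySql("order"));     // prints `order`
        }
    }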
How to test the dialect?
Apart from testing in Confluence, the short path is to use the JUnit tests in DialectTest.java:
- Inherit from DialectTest.java, naming the class after your dialect (e.g. “PostgresDialectTest.java”),
- Implement applyVendorSpecificInitialisation(). The method initializes the dialect, opens a database connection and initializes the database with testing data.
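A skeleton could look like this; the exact signature is an assumption, so check DialectTest.java in the repository:

    // Skeleton only: check DialectTest.java for the exact signature.
    public class MySqlDialectTest extends DialectTest {
        @Override
        protected void applyVendorSpecificInitialisation() throws Exception {
            // Initialize the dialect, open a connection to the test database
            // and create the testing data, like PostgresDialectTest does.
        }
    }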
As you implement features for your dialect, the JUnit methods testReadCapability(), testAutocompleteCapability(), testReadWriteCapability(), testEntities(), testFormulaDependencies(), testFormulaDependencyCascades() and testFormulaDependencyInfiniteRecursion() will test your dialect. As astute developers will have noticed, there are many more tests for formulas; that is because formulas were the first thing created after the capabilities/testing feature was added. It also means there is little coverage for the other capabilities. As you implement capabilities, it would be great to add tests which compare the behaviour of Postgres against your dialect.
What’s next?
While I don't expect an immediate crowd of developers to show up, it is fairly easy for a developer to add capabilities. I expect to see contributions from time to time, and maybe projects unrelated to the Atlassian ecosystem will need the dialects and will help maintain them.
