An sbt plugin for generating model classes from SQL query files in src/main/sql.
- Integrate the power of SQL and Scala
  - If you write SQL, sbt-sql creates a Scala class to read the SQL result.
- Type safety
  - No longer need to write code like `ResultSet.getColumn("id")`, etc.
  - Editors such as IntelliJ can show the SQL result parameter names and types.
  - For example, if you rename a column in SQL from `id` to `ID`, the code using `id` will be reported as a compilation error. Without sbt-sql, it would be a run-time exception, such as `Unknown column "id"!`.
- Reuse your SQL as a template
  - You can embed parameters in your SQL with automatically generated Scala functions.
sbt-sql requires sbt 1.8.x or higher.
**project/plugins.sbt**

```scala
// For Trino
addSbtPlugin("org.xerial.sbt" % "sbt-sql-trino" % "(version)")

// For DuckDB
addSbtPlugin("org.xerial.sbt" % "sbt-sql-duckdb" % "(version)")

// For SQLite (available since 0.7.0)
addSbtPlugin("org.xerial.sbt" % "sbt-sql-sqlite" % "(version)")

// For Treasure Data Presto
addSbtPlugin("org.xerial.sbt" % "sbt-sql-td" % "(version)")

// For Generic JDBC drivers
addSbtPlugin("org.xerial.sbt" % "sbt-sql" % "(version)")

// Add your jdbc driver dependency for checking the result schema
libraryDependencies ++= Seq(
  // Add airframe-codec for mapping JDBC data to Scala objects
  "org.wvlet.airframe" %% "airframe-codec" % "(airframe version)"
  // Add your jdbc driver here
)
```

**build.sbt**
This is an example of using a custom JDBC driver:
```scala
enablePlugins(SbtSQLJDBC)

// Add your JDBC driver to the dependency
// For using trino-jdbc
libraryDependencies ++= Seq(
  // Necessary for mapping JDBC ResultSets to model classes
  "org.wvlet.airframe" %% "airframe-codec" % "(airframe version)",
  "io.trino" % "trino-jdbc" % "332"
)

// You can change the SQL file folder. The default is src/main/sql
// sqlDir := (sourceDirectory in Compile).value / "sql"

// Configure your JDBC driver (e.g., using Trino JDBC)
jdbcDriver := "io.trino.jdbc.TrinoDriver"
jdbcURL := "(jdbc url e.g., jdbc:trino://.... )"
jdbcUser := "(jdbc user name)"
jdbcPassword := "(jdbc password)"
```

The sbt-sql-sqlite plugin uses src/main/sql/sqlite as the SQL file directory. Configure the jdbcURL property:
```scala
enablePlugins(SbtSQLSQLite)

jdbcURL := "jdbc:sqlite:(sqlite db file path)"
```

The sbt-sql-duckdb plugin uses src/main/sql/duckdb as the SQL file directory.
```scala
enablePlugins(SbtSQLDuckDB)

// [optional]
jdbcURL := "jdbc:duckdb:(duckdb file path)"
```

The sbt-sql-trino plugin uses src/main/sql/trino as the SQL file directory. Configure the jdbcURL and jdbcUser properties:
```scala
enablePlugins(SbtSQLTrino)

jdbcURL := "jdbc:trino://(your trino server address):443/(catalog name)"
jdbcUser := "(trino user name)"
```

To use Treasure Data, set the TD_API_KEY environment variable; jdbcUser will be set to this value. src/main/sql/trino will be the SQL file directory.
Alternatively, you can set TD_API_KEY in your sbt credentials:
**$HOME/.sbt/1.0/td.sbt**
```scala
credentials += Credentials("Treasure Data", "api-presto.treasuredata.com", "(your TD API KEY)", "")

enablePlugins(SbtSQLTreasureData)
```

**src/main/sql/trino/sample/nasdaq.sql**
```sql
@(start:Long, end:Long)
select * from sample_datasets.nasdaq
where time between ${start} and ${end}
```

From this SQL file, sbt-sql generates Scala model classes and several utility methods.
- An SQL file can contain template variables `${(Scala expression)}`. To define user input variables, use `@(name:type, ...)`. sbt-sql generates a function to populate them, such as `Nasdaq.select(start = xxxxx, end = yyyyy)`; see the usage sketch after this list. A variable can have a default value, e.g., `@(x:String="hello")`.
- Embed a String value:

```sql
@(symbol:String)
select * from sample_datasets.nasdaq where symbol = '${symbol}'
```

- Embed an input table name as a variable with the default value `sample_datasets.nasdaq`:

```sql
@(table:SQL="sample_datasets.nasdaq")
select * from ${table}
```

The SQL type can be used for embedding an SQL expression as a String.
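As a rough sketch of how the generated template functions can be used (object and parameter names follow the nasdaq example above; the exact generated code is listed at the end of this page):

```scala
// Hypothetical usage of the code sbt-sql generates from the
// @(start:Long, end:Long) template above; `sample.nasdaq` is the object
// generated from src/main/sql/trino/sample/nasdaq.sql.
object TemplateSketch {
  def main(args: Array[String]): Unit = {
    // sql(...) only renders the template into a query string,
    // substituting the declared variables
    val query: String = sample.nasdaq.sql(start = 1451606400L, end = 1451692800L)
    println(query)
    // select * from sample_datasets.nasdaq
    // where time between 1451606400 and 1451692800
  }
}
```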
You can use your own type for populating SQL templates by importing your class as follows:
```sql
@import your.own.class
```

To generate case classes with Option[X] parameters, add the __optional suffix to the target column names:
```sql
select a as a__optional  -- Option[X] will be used
from ...
```

Alternatively, use the @optional annotation:

```sql
@optional(a, b)
select a, b, c  -- Option[A], Option[B], C
```
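For illustration, a query like `select a as a__optional, b from ...` would yield a case class along these lines (the column types here are assumptions for the sketch, not actual generated output):

```scala
// Sketch only: assumes a result schema where a is a String and b is a Long
case class example(
  a: Option[String], // a__optional maps to Option[String]
  b: Long            // b stays non-optional
)
```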
**target/src_managed/main/sample/nasdaq.scala**

```scala
package sample

object nasdaq {
  def path: String = "/sample/nasdaq.sql"

  def sql(start: Long, end: Long): String = {
    s"""select * from sample_datasets.nasdaq
       where time between ${start} and ${end}
    """
  }

  def select(start: Long, end: Long)(implicit conn: java.sql.Connection): Seq[nasdaq] = ...

  def selectWith(sql: String)(implicit conn: java.sql.Connection): Seq[nasdaq] = ...
}

case class nasdaq(
  symbol: String,
  open: Double,
  volume: Long,
  high: Double,
  low: Double,
  close: Double,
  time: Long
)
```
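A minimal usage sketch, assuming a Trino connection (the JDBC URL below is a placeholder; `select` and `selectWith` are the generated methods shown above):

```scala
import java.sql.{Connection, DriverManager}

object NasdaqExample {
  def main(args: Array[String]): Unit = {
    // Placeholder URL: use the same driver and URL you configured in build.sbt
    implicit val conn: Connection =
      DriverManager.getConnection("jdbc:trino://(your trino server address):443/(catalog name)")
    try {
      // Type-safe access to the query result
      val rows: Seq[sample.nasdaq] = sample.nasdaq.select(start = 1451606400L, end = 1451692800L)
      for (r <- rows) {
        println(s"${r.symbol}: open=${r.open}, close=${r.close}")
      }

      // Run a hand-written query that returns the same columns,
      // mapped to the same model class
      val top10: Seq[sample.nasdaq] =
        sample.nasdaq.selectWith("select * from sample_datasets.nasdaq limit 10")
      println(s"fetched ${top10.size} rows")
    } finally {
      conn.close()
    }
  }
}
```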