
The Select Operation

The select operation is typically modelled to return one of the following:

  1. Single record
  2. Collection of 0 or more records
  3. Error (like no record found)

I think we can probably agree that, from the JDBC perspective, handling these three concerns separately increases your vectors of failure. A more interesting proposition would be to merge (1) and (2), and then eliminate (3) by making the error an attribute of the result.

interface SelectResult<T> {
  Collection<T> result();
  String error();
}

The select function pseudocode would then be reduced to something like this:

select operation

function SelectResult<T> selectOperation(String query, Object[] parameters){
  try(Connection conn = borrowConnection()){
    try(PreparedStatement ps = conn.prepareStatement(query)){
      for(int i = 0; i < parameters.length; i++){
        ps.setObject(i + 1, parameters[i])
      }
      Collection<T> collection = newCollection()
      try(ResultSet rs = ps.executeQuery()){
        ResultSetMetaData rsmd = rs.getMetaData()
        while(rs.next()){
          T data = newDataInstance()
          int numberOfColumns = rsmd.getColumnCount()
          for(int i = 1; i <= numberOfColumns; i++){
            String attributeName = resolveAttributeName(rsmd.getColumnName(i))
            Class<?> attributeType = resolveAttributeType(attributeName)
            data.set(attributeName, rs.getObject(i, attributeType))
          }
          collection.add(data)
        }
        return SelectResult.success(collection)
      }
      catch(Exception e){
        return SelectResult.failure(e)
      }
    }
    catch(Exception e){
      return SelectResult.failure(e)
    }
  }
  catch(Exception e){
    return SelectResult.failure(e)
  }
}

This pseudocode looks simple enough, but it is not quite that simple. Why so? The short answer is that relationships make things complicated. For the longer answer, stay with me. To make the discussion more concrete, I will use a Task entity to illustrate the concepts.

class Task {
  UUID id()
  String name()
  Boolean completed()
  LocalDate dateCreated()
}

The select handler has two dynamic parts, while the rest of the code is much like the guard rails around it. The complexity of the select operation (for this iteration) can be reduced to considering these dynamic parts

//the query parameters
for(int i = 0; i < parameters.length; i++){
  ps.setObject(i + 1, parameters[i])
}

//and

//the result set rows
ResultSetMetaData rsmd = rs.getMetaData()
int numberOfColumns = rsmd.getColumnCount()
while(rs.next()){
  T data = entityInstance()
  for(int i = 1; i <= numberOfColumns; i++){
    String attributeName = resolveAttributeName(rsmd.getColumnName(i))
    Class<?> attributeType = resolveAttributeType(attributeName)
    data.set(attributeName, rs.getObject(i, attributeType))
  }
  collection.add(data)
}

Let's look at the different ways relational tables can be modelled to store the Task data

Entity with no relationships

Visit Github Resources

create table if not exists tbl_task (
  id UUID default random_uuid(),
  name varchar(50) not null,
  done boolean not null default false,
  task_created timestamp not null default now(),
  constraint task_pk primary key(id),
  constraint uniq_name unique (name)
);

Mapping the relational record to its object equivalent is pretty straightforward:

Table Column    Object Attribute
id              id
name            name
done            completed
task_created    dateCreated

Looking at the ResultSet, there needs to be a way to create the data entity, and also a way to set its values

T data  = entityInstance() //create new instance
//and
data.set(rsmd.getColumnName(i), rs.getObject(i))  //set property value

This can be modelled by introducing an Entity interface that would require the concrete instance to implement these concerns.

interface Entity {

  EntityMetadata metadata();

  <T> void set(String attribute, T value);
}

Since the EntityMetadata describes an entity, and is not a property, it provides a convenient place to tuck in the entityInstance() method.

public abstract class EntityMetadata {

    final String tableName;
    final Map<String, ColumnInfo> columns;

    public EntityMetadata(String tableName, Map<String, ColumnInfo> columns) {
        this.tableName = tableName;
        this.columns = columns;
    }

    public abstract <T extends Entity> T entityInstance();
}

And the Task class would now look like this:

class Task implements Entity {

  UUID id;
  String name;
  Boolean completed;
  LocalDate dateCreated;

  @Override
  public EntityMetadata metadata() {
      return Task.metadata;
  }

  @Override
  public <T> void set(String attribute, T value){
    switch(attribute){
      case "id":
        this.id = (UUID) value; //use setter if you have one
        break;
      case "name":
        this.name = (String) value; //use setter if you have one
        break;
      case "completed":
        this.completed = (Boolean) value; //use setter if you have one
        break;
      case "dateCreated":
        this.dateCreated = (LocalDate) value; //use setter if you have one
        break;
      default:
        warning("what the heck are you doing?");
    }
  }
}

At this point, we need to examine two other functions:

String attributeName = resolveAttributeName(rsmd.getColumnName(i))
//and
Class<?> attributeType = resolveAttributeType(attributeName)

It seems that in order to avoid resorting to reflection, we need to have some prior knowledge of this type information. For now, let's assume we have it already in a mapping table.

Table Column    Object Attribute    Attribute Type
id              id                  UUID
name            name                String
done            completed           Boolean
task_created    dateCreated         LocalDate

For the mapping, we can assume we have an object storing this information, as illustrated below:

class EntityMetadata {

    final String tableName;
    final Map<String, ColumnInfo> columns;

    public EntityMetadata(String tableName, Map<String, ColumnInfo> columns) {
        this.tableName = tableName;
        this.columns = columns;
    }
}

//and

class ColumnInfo {
  String columnName;
  String attributeName;
  Class<?> attributeType;
}

The Task can then have a static field holding the metadata about itself:

static EntityMetadata metadata = new EntityMetadata("tbl_task", Map.of(
  "id", new ColumnInfo("id", "id", UUID.class),
  "name", new ColumnInfo("name", "name", String.class),
  "done", new ColumnInfo("done", "completed", Boolean.class),
  "task_created", new ColumnInfo("task_created", "dateCreated", LocalDate.class))) {
    @Override
    public Task entityInstance() {
        return new Task();
    }
};

And the EntityMetadata would provide the perfect place to add the resolving functions:

public String resolveAttributeName(String columnName){
    if(columns.containsKey(columnName)){
        return columns.get(columnName).attributeName;
    }
    throw new EntityMappingException(String.format("column %s has no associated mapping in the metadata", columnName));
}

public Class<?> resolveAttributeType(String attributeName){
    Optional<ColumnInfo> optional = columns.values().stream()
            .filter(info -> info.attributeName.equals(attributeName))
            .findFirst();
    if(optional.isPresent()) {
        return optional.get().attributeType;
    }
    throw new EntityMappingException(String.format("attribute %s not found in the metadata", attributeName));
}
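The EntityMappingException thrown by these resolvers is not defined anywhere in the snippets; a minimal sketch, assuming it is nothing more than an unchecked exception:

//hypothetical helper, not part of the original listings: a plain unchecked exception for mapping failures
public class EntityMappingException extends RuntimeException {
    public EntityMappingException(String message) {
        super(message);
    }
}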

With the modelling we have so far, I think we have sufficiently covered the first scenario, and the actual source code will look something like this:

create new table
public static void createTaskTable() {
    String sql = "create table if not exists tbl_task (\n" +
            "  id UUID default random_uuid(),\n" +
            "  name varchar(50) not null,\n" +
            "  done boolean not null default false,\n" +
            "  task_created timestamp not null default now(),\n" +
            "  constraint task_pk primary key(id),\n" +
            "  constraint uniq_name unique (name)\n" +
            ");";
    try (Connection conn = DriverManager.getConnection(connectionString);
          Statement stmt = conn.createStatement()) {
        stmt.execute(sql);
    } catch (SQLException e) {
        System.out.println(e.getMessage());
    }
}
public class SelectResult<T> {

    Collection<T> result;
    String error;

    public SelectResult(Collection<T> result, String error) {
        this.result = result;
        this.error = error;
    }

    public static <T> SelectResult<T> success(Collection<T> result){
        return new SelectResult<>(result, null);
    }

    public static <T> SelectResult<T> failure(Throwable error){
        return new SelectResult<>(Collections.emptyList(), error.getMessage());
    }
}
public class Select {

    private static final String connectionString = "jdbc:h2:./data/sample-1.db";

    public static void main(String[] args) {
        SelectResult<Task> tasks = select("select * from tbl_task", new Object[]{}, Task.metadata);
        if(tasks.error != null){
            System.out.println(tasks.error);
        }
        else {
            for (Task task : tasks.result) {
                System.out.println(task);
            }
        }
    }

    public static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata){
        try(Connection conn = DriverManager.getConnection(connectionString)){
            try(PreparedStatement ps = conn.prepareStatement(query)){
                for(int i = 0; i < parameters.length; i++){
                    ps.setObject(i + 1, parameters[i]);
                }

                Collection<T> collection = new ArrayList<>();
                try(ResultSet rs = ps.executeQuery()){
                    ResultSetMetaData rsmd = rs.getMetaData();
                    while(rs.next()){
                        T data = metadata.entityInstance();
                        int numberOfColumns = rsmd.getColumnCount();
                        for(int i = 1; i <= numberOfColumns; i++){
                            String attributeName = metadata.resolveAttributeName(rsmd.getColumnName(i).toLowerCase());
                            Class<?> attributeType = metadata.resolveAttributeType(attributeName);
                            data.set(attributeName, rs.getObject(i, attributeType));
                        }
                        collection.add(data);
                    }
                    return SelectResult.success(collection);
                }
                catch(Exception e){
                    return SelectResult.failure(e);
                }
            }
            catch(Exception e){
                return SelectResult.failure(e);
            }
        }
        catch(Exception e){
            return SelectResult.failure(e);
        }
    }
}

Entity with an embedded association

Visit Github Resources

We have considered the most basic mapping we could, and in doing so, we have come up with a lot of boilerplate we can reuse when considering other scenarios. To begin with, let's consider a scenario where an entity has an embedded association which is mapped to columns in the same table, for instance:

create table if not exists tbl_customer (
  member_id UUID default random_uuid(),
  first_name varchar(50) not null,
  last_name varchar(50) not null,
  date_joined timestamp not null default now(),
  member_level varchar default 'SILVER',
  addr_city varchar(50),
  addr_state varchar(30),
  addr_postal_code varchar(10),
  constraint customer_pk primary key(member_id)
);

Using an embedded entity, this table can be modelled as shown below:

class Customer implements Entity {
  UUID id;
  String firstName;
  String lastName;
  Level level;
  Address address;
  LocalDate dateJoined;
}

//and the corresponding Address class
class Address implements Entity {
  String city;
  String state;
  String zipCode;
}

//and the Level is an enum
enum Level {
    SILVER,
    PLATINUM,
    GOLD,
    DIAMOND
}

To adequately model this entity, we will need some updates to the existing classes we have so far.

  1. A way to distinguish an embedded entity from an identifiable entity (one with an identity)
public enum EntityType {
    IDENTIFIABLE,
    EMBEDDABLE,
}
  2. Associating an entity type with the EntityMetadata
public abstract class EntityMetadata {

    final String tableName;
    final EntityType entityType;
    final List<ColumnInfo> columns;

    public EntityMetadata(String tableName, List<ColumnInfo> columns) {
        this(tableName, columns, EntityType.IDENTIFIABLE);
    }

    public EntityMetadata(String tableName, List<ColumnInfo> columns, EntityType entityType) {
        this.tableName = tableName;
        this.entityType = entityType;
        this.columns = columns;
    }

    public abstract <T extends Entity> T entityInstance();

    public String resolveAttributeName(String columnName){
        Optional<ColumnInfo> optional = columns.stream()
                .filter(info -> info.columnName != null)
                .filter(info -> info.columnName.equals(columnName))
                .findFirst();
        if(optional.isPresent()) {
            return optional.get().attributeName;
        }
        throw new EntityMappingException(String.format("column %s not found in the metadata", columnName));
    }

    public Class<?> resolveAttributeType(String attributeName){
        Optional<ColumnInfo> optional = columns.stream()
                .filter(info -> info.attributeName != null)
                .filter(info -> info.attributeName.equals(attributeName))
                .findFirst();
        if(optional.isPresent()) {
            return optional.get().attributeType;
        }
        throw new EntityMappingException(String.format("attribute %s not found in the metadata", attributeName));
    }
}
  3. Associating an entity type with ColumnInfo
public class ColumnInfo {

    String columnName;
    String attributeName;
    Class<?> attributeType;
    Boolean isEmbedded;

    public ColumnInfo(String columnName, String attributeName, Class<?> attributeType) {
        this(columnName, attributeName, attributeType, false);
    }

    public ColumnInfo(String columnName, String attributeName, Class<?> attributeType, Boolean isEmbedded) {
        this.columnName = columnName;
        this.attributeName = attributeName;
        this.attributeType = attributeType;
        this.isEmbedded = isEmbedded;
    }
}

With these adjustments, the entity objects would now look like this:

public class Address implements Entity {

    static Logger LOG = Logger.getLogger(Address.class.getName());

    static EntityMetadata metadata = new EntityMetadata(null, List.of(
            new ColumnInfo("addr_city", "city", String.class),
            new ColumnInfo("addr_state", "state", String.class),
            new ColumnInfo("addr_postal_code", "zipCode", String.class)
    ), EntityType.EMBEDDABLE) {
        @Override
        public Address entityInstance() {
            return new Address();
        }
    };

    String city;
    String state;
    String zipCode;
    ...
}
//and for the Customer
public class Customer implements Entity {

    static Logger LOG = Logger.getLogger(Customer.class.getName());

    static EntityMetadata metadata = new EntityMetadata("tbl_customer", List.of(
            new ColumnInfo("member_id", "id", UUID.class),
            new ColumnInfo("first_name", "firstName", String.class),
            new ColumnInfo("last_name", "lastName", String.class),
            new ColumnInfo("member_level", "level", Level.class),
            new ColumnInfo(null, "address", Address.class, true),
            new ColumnInfo("date_joined", "dateJoined", LocalDate.class)
    )) {
        @Override
        public Customer entityInstance() {
            return new Customer();
        }
    };

    UUID id;
    String firstName;
    String lastName;
    Level level;
    Address address;
    LocalDate dateJoined;
    ...
}

The last piece which will require some refactoring at this stage is the Select class.

public static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata){
    try(Connection conn = DriverManager.getConnection(connectionString)){
        try(PreparedStatement ps = conn.prepareStatement(query)){
            for(int i = 0; i < parameters.length; i++){
                ps.setObject(i + 1, parameters[i]);
            }

            Collection<T> collection = new ArrayList<>();
            try(ResultSet rs = ps.executeQuery()){
                ResultSetMetaData meta = rs.getMetaData();
                extractEntities(metadata, collection, rs, meta);
                return SelectResult.success(collection);
            }
            catch(Exception e){
                return SelectResult.failure(e);
            }
        }
        catch(Exception e){
            return SelectResult.failure(e);
        }
    }
    catch(Exception e){
        return SelectResult.failure(e);
    }
}

//and the extracted method is
private static <T extends Entity> void extractEntities(EntityMetadata metadata, Collection<T> collection, ResultSet rs, ResultSetMetaData meta) throws SQLException {
    while(rs.next()){
        T data = metadata.entityInstance();
        int numberOfColumns = meta.getColumnCount();
        for(int i = 1; i <= numberOfColumns; i++){
            String attributeName = metadata.resolveAttributeName(meta.getColumnName(i).toLowerCase());
            Class<?> attributeType = metadata.resolveAttributeType(attributeName);
            data.set(attributeName, rs.getObject(i, attributeType));
        }
        collection.add(data);
    }
}

If you run the select method at this point without further changes, you will encounter an error like "Feature not supported: <package_name>.Level" [50100-200]. This is because Level is an enumeration, which is a custom type. We need to be aware of this and provide additional information in the metadata.

public class ColumnInfo {

    String columnName;
    String attributeName;
    Class<?> attributeType;
    Boolean isEmbedded;
    Boolean isEnum;

With these changes, you start to get a feeling that the constructors in ColumnInfo will quickly grow out of control as we continue to add more options. So the approach I will use here is a builder for the ColumnInfo.

public class ColumnInfo {

    //...properties as shown before

    public ColumnInfo(String columnName, String attributeName, Class<?> attributeType, Boolean isEmbedded, Boolean isEnum) {
        this.columnName = columnName;
        this.attributeName = attributeName;
        this.attributeType = attributeType;
        this.isEmbedded = isEmbedded;
        this.isEnum = isEnum;
    }

    public static ColumnInfoBuilder builder(){
        return new ColumnInfoBuilder();
    }
}

//and the builder
public class ColumnInfoBuilder {

    private String columnName;
    private String attributeName;
    private Class<?> attributeType = String.class;
    private Boolean isEmbedded = false;
    private Boolean isEnum = false;

    public ColumnInfoBuilder columnName(String columnName){
        this.columnName = columnName;
        return this;
    }

    public ColumnInfoBuilder attributeName(String attributeName){
        this.attributeName = attributeName;
        return this;
    }

    public ColumnInfoBuilder attributeType(Class<?> attributeType){
        this.attributeType = attributeType;
        return this;
    }

    public ColumnInfoBuilder isEmbedded(Boolean isEmbedded){
        this.isEmbedded = isEmbedded;
        return this;
    }

    public ColumnInfoBuilder isEnum(Boolean isEnum){
        this.isEnum = isEnum;
        return this;
    }

    public ColumnInfo build(){
        return new ColumnInfo(columnName, attributeName, attributeType, isEmbedded, isEnum);
    }
}

Now we need to make adjustments to the places where new ColumnInfo(...) was previously used, as shown below:

//Address metadata
ColumnInfo.builder().columnName("addr_city").attributeName("city").build(),
ColumnInfo.builder().columnName("addr_state").attributeName("state").build(),
ColumnInfo.builder().columnName("addr_postal_code").attributeName("zipCode").build()

//Customer metadata
ColumnInfo.builder().columnName("member_id").attributeName("id").attributeType(UUID.class).build(),
ColumnInfo.builder().columnName("first_name").attributeName("firstName").build(),
ColumnInfo.builder().columnName("last_name").attributeName("lastName").build(),
ColumnInfo.builder().columnName("member_level").attributeName("level").attributeType(Level.class).isEnum(true).build(),
ColumnInfo.builder().attributeName("address").attributeType(Address.class).isEmbedded(true).build(),
ColumnInfo.builder().columnName("date_joined").attributeName("dateJoined").attributeType(LocalDate.class).build()

Now we can refactor the extractEntities function to accommodate enum types:

private static <T extends Entity> void extractEntities(EntityMetadata metadata, Collection<T> collection, ResultSet rs, ResultSetMetaData meta) throws SQLException {
    while(rs.next()){
        T data = metadata.entityInstance();
        int numberOfColumns = meta.getColumnCount();
        for(int i = 1; i <= numberOfColumns; i++){
            String columnName = meta.getColumnName(i).toLowerCase();
            Optional<ColumnInfo> columnInfoOption = metadata.resolveByColumnName(columnName);
            if(columnInfoOption.isPresent()) {
                ColumnInfo columnInfo = columnInfoOption.get();
                String attributeName = columnInfo.attributeName;
                //read enum columns as plain strings; the entity's set(...) converts them
                Class<?> attributeType = columnInfo.isEnum ? String.class : columnInfo.attributeType;
                data.set(attributeName, rs.getObject(i, attributeType));
            }
        }
        collection.add(data);
    }
}

//And also update set(...) method in Customer
case "level":
  this.level = Level.valueOf((String)value); //This change is REQUIRED for Enums to work seamlessly
  break;
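The resolveByColumnName(...) helper used above is not listed elsewhere in these snippets; a minimal sketch of what it could look like on EntityMetadata, assuming it simply searches the column list by column name:

//hypothetical sketch of the lookup used above
public Optional<ColumnInfo> resolveByColumnName(String columnName){
    return columns.stream()
            .filter(info -> info.columnName != null)
            .filter(info -> info.columnName.equals(columnName))
            .findFirst();
}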

At this point, if we run the select method again, the enum issue is resolved. However, the embedded address value is still not populated, so it's time to make this work. To do this, we need to know which Entity type is embedded. We can have a registry of Entity types from which we can resolve a particular entity class. For this, let's add some kind of metadata registry, and move the self-registration of EntityMetadata from the Entity classes into it as well (this design might change later on as the discussion evolves).

public class EntityRegistry {

    public static final Map<Class<? extends Entity>, EntityMetadata> registry = new HashMap<>();

    static {
        //Address Entity
        registry.put(Address.class, new EntityMetadata(Address.class, null, List.of(
                ColumnInfo.builder().columnName("addr_city").attributeName("city").build(),
                ColumnInfo.builder().columnName("addr_state").attributeName("state").build(),
                ColumnInfo.builder().columnName("addr_postal_code").attributeName("zipCode").build()
        ), EntityType.EMBEDDABLE) {

            @Override
            public Address entityInstance() {
                return new Address();
            }
        });

        //Customer Entity
        registry.put(Customer.class, new EntityMetadata(Customer.class, "tbl_customer", List.of(
                ColumnInfo.builder().columnName("member_id").attributeName("id").attributeType(UUID.class).build(),
                ColumnInfo.builder().columnName("first_name").attributeName("firstName").build(),
                ColumnInfo.builder().columnName("last_name").attributeName("lastName").build(),
                ColumnInfo.builder().columnName("member_level").attributeName("level").attributeType(Level.class).isEnum(true).build(),
                ColumnInfo.builder().attributeName("address").attributeType(Address.class).isEmbedded(true).build(),
                ColumnInfo.builder().columnName("date_joined").attributeName("dateJoined").attributeType(LocalDate.class).build()
        )) {

            @Override
            public Customer entityInstance() {
                return new Customer();
            }
        });
    }
}

// refactor the EntityMetadata constructors to capture this information
public EntityMetadata(Class<? extends Entity> entityClass, String tableName, List<ColumnInfo> columns) {
    this(entityClass, tableName, columns, EntityType.IDENTIFIABLE);
}

public EntityMetadata(Class<? extends Entity> entityClass, String tableName, List<ColumnInfo> columns, EntityType entityType) {
    this.tableName = tableName;
    this.entityType = entityType;
    this.columns = columns;
    EntityRegistry.registry.put(entityClass, this);
}

Now we have sufficient information to refactor the extractEntities function. I have broken extractEntities into two functions so that I can use recursion to extract row data into nested properties.

private static <T extends Entity> void extractEntities(ResultSet rs, ResultSetMetaData meta, EntityMetadata metadata, Collection<T> collection) throws SQLException {
    while(rs.next()){
        T data = extractEntity(rs, meta, metadata);
        collection.add(data);
    }
}

private static <T extends Entity> T extractEntity(ResultSet rs, ResultSetMetaData meta, EntityMetadata metadata) throws SQLException {
    T data = metadata.entityInstance();
    int numberOfColumns = meta.getColumnCount();

    //check mapped columns
    for(int i = 1; i <= numberOfColumns; i++){
        String columnName = meta.getColumnName(i).toLowerCase(); //important since the name might be in upper case
        Optional<ColumnInfo> columnInfoOption = metadata.resolveByColumnName(columnName);
        if(columnInfoOption.isPresent()) {
            ColumnInfo columnInfo = columnInfoOption.get();
            String attributeName = columnInfo.attributeName;
            Class<?> attributeType = columnInfo.isEnum ? String.class : columnInfo.attributeType;
            data.set(attributeName, rs.getObject(i, attributeType));
        }
    }

    //check for Embedded columns
    if(metadata.containsEmbedded()){
        for(ColumnInfo embeddedColumnInfo : metadata.embeddedColumns()){
            String attributeName = embeddedColumnInfo.attributeName;
            EntityMetadata embeddedEntityMetadata = EntityRegistry.registry.get(embeddedColumnInfo.attributeType);
            Object embeddedValue = extractEntity(rs, meta, embeddedEntityMetadata);
            data.set(attributeName, embeddedValue);
        }
    }
    return data;
}
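The containsEmbedded() and embeddedColumns() helpers used above are not shown in the listings; a minimal sketch on EntityMetadata, assuming they simply filter on the isEmbedded flag:

//hypothetical sketches of the helpers used above
public Boolean containsEmbedded(){
    return columns.stream().anyMatch(info -> info.isEmbedded != null && info.isEmbedded);
}

public List<ColumnInfo> embeddedColumns(){
    return columns.stream().filter(info -> info.isEmbedded).collect(Collectors.toList());
}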

Now when you execute the select function, all the data expected is retrieved successfully.
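As a quick sanity check, a usage sketch along the lines of the earlier Task example (assuming the same Select class and connection string from the first scenario) could look like this:

//hypothetical usage sketch, mirroring the earlier main(...) method
SelectResult<Customer> customers = select("select * from tbl_customer", new Object[]{}, EntityRegistry.registry.get(Customer.class));
if (customers.error != null) {
    System.out.println(customers.error);
} else {
    for (Customer customer : customers.result) {
        System.out.println(customer); //each Customer now carries its Level and embedded Address
    }
}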

Table with a composite primary key

Visit Github Resources

While we are still evaluating scenarios with no relationships, another possible variation is one where the table has a primary key composed of two or more columns, instead of just a single column. Let's assume a user's primary key is the combination of their username and email address.

create table if not exists tbl_user (
  email_address varchar(100) not null,
  username varchar(50) not null,
  nickname varchar(50),
  date_joined timestamp not null default now(),
  access_level varchar default 'VISITOR',
  addr_city varchar(50),
  addr_state varchar(30),
  addr_postal_code varchar(10),
  constraint customer_pk primary key(email_address, username)
);

This can be modeled as an object in this way

public class User implements Entity {

    static Logger LOG = Logger.getLogger(User.class.getName());

    UserId userId;
    String nickName;
    AccessLevel accessLevel;
    Address address;
    LocalDate dateJoined;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(User.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            case "userId":
                this.userId = (UserId) value;
                break;
            case "nickName":
                this.nickName = (String) value;
                break;
            case "accessLevel":
                this.accessLevel = AccessLevel.valueOf((String)value);
                break;
            case "address":
                this.address = (Address) value;
                break;
            case "dateJoined":
                this.dateJoined = (LocalDate) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }
}

// and the user Id would be
public class UserId {
    String emailAddress;
    String username;
}

A plausible strategy here would be to have the UserId class implement the Entity interface, and then have the ColumnInfo capture this new type of metadata

public class UserId implements Entity {

    static Logger LOG = Logger.getLogger(UserId.class.getName());

    String emailAddress;
    String username;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(UserId.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            case "emailAddress":
                this.emailAddress = (String) value;
                break;
            case "username":
                this.username = (String) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }
}

The EntityType should have an additional entry to capture the COMPOSITE_PK type:

public enum EntityType {
    IDENTIFIABLE,
    EMBEDDABLE,
    COMPOSITE_PK
}

The ColumnInfo should also capture this new type information. Update the ColumnInfoBuilder as well, as sketched after the field below.

Boolean isCompositePk;
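A minimal sketch of the corresponding builder addition, assuming it follows the same pattern as the existing isEmbedded and isEnum options (the other flags introduced later in this article would follow the same pattern, with build() passing each new field through to a matching ColumnInfo constructor):

//in ColumnInfoBuilder (hypothetical sketch)
private Boolean isCompositePk = false;

public ColumnInfoBuilder isCompositePk(Boolean isCompositePk){
    this.isCompositePk = isCompositePk;
    return this;
}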

In the EntityRegistry, add new entries for the UserId and User entities

//UserId Entity
registry.put(UserId.class, new EntityMetadata(UserId.class, null, List.of(
        ColumnInfo.builder().columnName("email_address").attributeName("emailAddress").build(),
        ColumnInfo.builder().columnName("username").attributeName("username").build()
), EntityType.COMPOSITE_PK) {

    @Override
    public UserId entityInstance() {
        return new UserId();
    }
});

//User Entity
registry.put(User.class, new EntityMetadata(User.class, "tbl_user", List.of(
        ColumnInfo.builder().attributeName("userId").attributeType(UserId.class).isCompositePk(true).build(),
        ColumnInfo.builder().columnName("nickname").attributeName("nickName").build(),
        ColumnInfo.builder().columnName("access_level").attributeName("accessLevel").attributeType(AccessLevel.class).isEnum(true).build(),
        ColumnInfo.builder().attributeName("address").attributeType(Address.class).isEmbedded(true).build(),
        ColumnInfo.builder().columnName("date_joined").attributeName("dateJoined").attributeType(LocalDate.class).build()
)) {

    @Override
    public User entityInstance() {
        return new User();
    }
});

At this point, if you execute the select function, you will get a valid result, but the User entities will be missing the userId value, since it's not yet mapped. Since all the necessary plumbing is in place, let's now refactor the select function.

private static <T extends Entity> T extractEntity(ResultSet rs, ResultSetMetaData meta, EntityMetadata metadata) throws SQLException {
  T data = metadata.entityInstance();
  int numberOfColumns = meta.getColumnCount();

  //check mapped columns
  for(int i = 1; i <= numberOfColumns; i++){
      String columnName = meta.getColumnName(i).toLowerCase(); //important since the name might be in upper case
      Optional<ColumnInfo> columnInfoOption = metadata.resolveByColumnName(columnName);
      if(columnInfoOption.isPresent()) {
          ColumnInfo columnInfo = columnInfoOption.get();
          String attributeName = columnInfo.attributeName;
          Class<?> attributeType = columnInfo.isEnum ? String.class : columnInfo.attributeType;
          data.set(attributeName, rs.getObject(i, attributeType));
      }
  }

  //check for Embedded columns
  if(metadata.containsEmbedded()){
      for(ColumnInfo embeddedColumnInfo : metadata.embeddedColumns()){
          String attributeName = embeddedColumnInfo.attributeName;
          EntityMetadata embeddedEntityMetadata = EntityRegistry.registry.get(embeddedColumnInfo.attributeType);
          Object embeddedValue = extractEntity(rs, meta, embeddedEntityMetadata);
          data.set(attributeName, embeddedValue);
      }
  }

  //check for CompositePk columns
  if(metadata.containsCompositePk()){
      ColumnInfo compositePkColumnInfo = metadata.compositePkColumn(); //can only have one
      String attributeName = compositePkColumnInfo.attributeName;
      EntityMetadata compositePkEntityMetadata = EntityRegistry.registry.get(compositePkColumnInfo.attributeType);
      Object embeddedValue = extractEntity(rs, meta, compositePkEntityMetadata);
      data.set(attributeName, embeddedValue);
  }

  return data;
}
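Like the earlier helpers, containsCompositePk() and compositePkColumn() are not shown elsewhere; a minimal sketch on EntityMetadata, assuming they filter on the isCompositePk flag:

//hypothetical sketches of the helpers used above
public Boolean containsCompositePk(){
    return columns.stream().anyMatch(info -> info.isCompositePk != null && info.isCompositePk);
}

public ColumnInfo compositePkColumn(){
    //can only have one
    return columns.stream().filter(info -> info.isCompositePk).findFirst().orElseThrow();
}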

By adding a check for composite key columns, the columns are correctly mapped, and all the expected data ends up in the User object.

Table with 1-to-1 relationship

Visit Github Resources

We have considered mapping with no relationships so far, and it has been smooth sailing. Let's go another step further and consider a scenario where the entity has a 1-to-1 relationship with another entity, for instance:

create table if not exists tbl_task (
  id UUID default random_uuid(),
  name varchar(50) not null,
  done boolean not null default false,
  task_created timestamp not null default now(),
  parent_task UUID,
  constraint task_pk primary key(id),
  constraint uniq_name unique (name),
  constraint task_parent_fk foreign key(parent_task) references tbl_task(id)
);

This kind of relationship can be modeled as shown below. We could constrain parent_task to be unique to make the relationship truly 1-1, but that's a matter of design.

public class Task implements Entity {

    static Logger LOG = Logger.getLogger(Task.class.getName());

    UUID id;
    String name;
    Boolean completed;
    LocalDate dateCreated;
    Task parentTask;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(Task.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            case "id":
                this.id = (UUID) value;
                break;
            case "name":
                this.name = (String) value;
                break;
            case "completed":
                this.completed = (Boolean) value;
                break;
            case "dateCreated":
                this.dateCreated = (LocalDate) value;
                break;
            case "parentTask":
                this.parentTask = (Task) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }
}

With this seemingly small change, the implications are nonetheless significant. There is a new column which is not just another data column; it's one that can be used to join with other tables, and it will have a larger impact on the select operation than the other scenarios we have previously considered.

The first place we need to capture this new metadata information is in the ColumnInfo. Make the corresponding change in ColumnInfoBuilder as well

Boolean isPkColumn;
Boolean isFkColumn;

As a result of the changes above, the EntityRegistry will look as shown here:

public class EntityRegistry {

    public static final Map<Class<? extends Entity>, EntityMetadata> registry = new HashMap<>();

    static {
        //Task Entity
        registry.put(Task.class, new EntityMetadata(Task.class, "tbl_task", List.of(
                ColumnInfo.builder().columnName("id").attributeName("id").attributeType(UUID.class).isPkColumn(true).build(),
                ColumnInfo.builder().columnName("name").attributeName("name").build(),
                ColumnInfo.builder().columnName("done").attributeName("completed").attributeType(Boolean.class).build(),
                ColumnInfo.builder().columnName("task_created").attributeName("dateCreated").attributeType(LocalDate.class).build(),
                ColumnInfo.builder().columnName("parent_task").attributeName("parentTask").attributeType(Task.class).isFkColumn(true).build()
        )) {

            @Override
            public Task entityInstance() {
                return new Task();
            }
        });
    }
}

At this point, without modification to the select function, you will get a valid response, but the parentTask value will be missing (null). This will require some changes to the select function. But first, as we have done previously, let's make some additions to the EntityMetadata that will be used in the select function:

public ColumnInfo pkColumn() {
    return columns.stream().filter(info -> info.isPkColumn).findFirst().orElseThrow();
}

public Boolean containsFkColumns(){
    return columns.stream().anyMatch(info -> info.isFkColumn);
}

public List<ColumnInfo> fkColumns(){
    return columns.stream().filter(info -> info.isFkColumn).collect(Collectors.toList());
}

And finally the select function can start taking shape. Since this requires fetching data from a new ResultSet and not the current one (as has been the case so far), let's refactor the select function to separate acquiring the Connection from using the connection. In this manner, we can re-use the same connection to fetch a new ResultSet instead of creating a new connection (which would be a terrible use of limited resources).

public static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata) {
    try (Connection conn = DriverManager.getConnection(connectionString)) {
        return select(query, parameters, metadata, conn);
    } catch (Exception e) {
        return SelectResult.failure(e);
    }
}

private static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata, Connection conn) {
    try (PreparedStatement ps = conn.prepareStatement(query)) {
        for (int i = 0; i < parameters.length; i++) {
            ps.setObject(i + 1, parameters[i]);
        }

        try (ResultSet rs = ps.executeQuery()) {
            ResultSetMetaData meta = rs.getMetaData();
            Collection<T> collection = extractEntities(rs, meta, metadata, conn);
            return SelectResult.success(collection);
        } catch (Exception e) {
            return SelectResult.failure(e);
        }
    } catch (Exception e) {
        return SelectResult.failure(e);
    }
}

private static <T extends Entity> Collection<T> extractEntities(ResultSet rs, ResultSetMetaData meta, EntityMetadata metadata, Connection conn) throws SQLException {
    Collection<T> collection = new ArrayList<>();
    while (rs.next()) {
        T data = extractEntity(rs, meta, metadata, conn);
        collection.add(data);
    }
    return collection;
}

private static <T extends Entity> T extractEntity(ResultSet rs, ResultSetMetaData meta, EntityMetadata metadata, Connection conn) throws SQLException {
    T data = metadata.entityInstance();
    int numberOfColumns = meta.getColumnCount();

    //add a check to skip fkColumns, since these should be treated differently
    //check mapped columns
    for (int i = 1; i <= numberOfColumns; i++) {
        String columnName = meta.getColumnName(i).toLowerCase(); //important since the name might be in upper case
        Optional<ColumnInfo> columnInfoOption = metadata.resolveByColumnName(columnName);
        if (columnInfoOption.isPresent()) {
            ColumnInfo columnInfo = columnInfoOption.get();
            if (!columnInfo.isFkColumn) {
                String attributeName = columnInfo.attributeName;
                Class<?> attributeType = columnInfo.isEnum ? String.class : columnInfo.attributeType;
                data.set(attributeName, rs.getObject(i, attributeType));
            }
        }
    }

    //...embedded and composite pk checks remain as before

    //add a check for fkColumns
    //check for foreign key columns (with join with single pk column)
    if (metadata.containsFkColumns()) {
        for (ColumnInfo fkColumnInfo : metadata.fkColumns()) {
            String attributeName = fkColumnInfo.attributeName;
            EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
            String query = joinedEntityMetadata.createJoinQuery(metadata.tableName, fkColumnInfo.columnName);
            Object joinValue = rs.getObject(fkColumnInfo.columnName, joinedEntityMetadata.pkColumn().attributeType);
            if (joinValue != null) {
                SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn);
                if (fkEntityValue.result != null && fkEntityValue.result.size() > 0) {
                    data.set(attributeName, new ArrayList<>(fkEntityValue.result).get(0));
                }
            }
        }
    }

    return data;
}

Modify the extractEntity to accept a connection argument, and pass this connection along as well.

Last but not least, we need that query generator function for this scenario. This logic belongs to the EntityMetadata class

public String createJoinQuery(String fkColumnTable, String fkColumnName) {
    ColumnInfo pkColumnInfo = pkColumn();
    String[] query = new String[]{
            "select PK_TABLE.* from " + tableName + " PK_TABLE",
            "inner join " + fkColumnTable + " FK_TABLE on FK_TABLE." + fkColumnName + " = PK_TABLE." + pkColumnInfo.columnName,
            "where FK_TABLE." + fkColumnName + " = ?"
    };
    return String.join(" ", query);
}
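To make the generated SQL concrete: for the tbl_task metadata above (pk column id, fk column parent_task), this generator would produce a self-join along these lines:

-- illustrative output of createJoinQuery("tbl_task", "parent_task")
select PK_TABLE.* from tbl_task PK_TABLE inner join tbl_task FK_TABLE on FK_TABLE.parent_task = PK_TABLE.id where FK_TABLE.parent_task = ?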

At this point, when you fire up the select function, all looks good, but this can quickly get out of control under various conditions:

  1. The nested select may become a cyclic loop and overflow the memory stack
  2. The primary key might be a composite key

Querying the table below will quickly illustrate the first problem

ID            NAME      DONE    TASK_CREATED    PARENT_TASK
28279a37-...  Wake up   false   2021-08-11...
d4488055-...  Exercise  false   2021-08-11...   28279a37-...
43b436e8-...  Shower    false   2021-08-11...   d4488055-...
e5a0bf46-...  Task 1    false   2021-08-12...   a3910660-...
66bb54c7-...  Task 2    false   2021-08-12...   e0e2cf8b-...
dda33caa-...  Task 3    false   2021-08-12...
9b39a950-...  Task 4    false   2021-08-12...   7b0041a9-...
7b0041a9-...  Task 5    false   2021-08-12...   9b39a950-...
19cc698c-...  Task 6    false   2021-08-12...
e0e2cf8b-...  Task 7    false   2021-08-12...   dda33caa-...
67b63202-...  Task 8    false   2021-08-12...   66bb54c7-...
a3910660-...  Task 9    false   2021-08-12...

To resolve this issue, we'll need to pass an additional argument to the select function to cache the results for the duration of the query. The important thing here is to make sure that the primary key implements the Comparable interface, since we are using a TreeMap, which compares the actual ID values.

//create local cache
public class LocalCache<VAL> extends TreeMap<Comparable<?>, VAL> {

    public void addIfNotExists(Comparable<?> key, VAL value){
        if(!containsKey(key)){
            put(key, value);
        }
    }

    public Optional<VAL> getIfExists(Comparable<?> key){
        return Optional.ofNullable(get(key));
    }
}

//use local cache in select function
public static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata) {
    try (Connection conn = DriverManager.getConnection(connectionString)) {
        LocalCache<T> cache = new LocalCache<>();
        SelectResult<T> result = select(query, parameters, metadata, conn, cache);
        cache.clear();
        return result;
    } catch (Exception e) {
        return SelectResult.failure(e);
    }
}

//update other functions to accept the cache argument
private static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata, Connection conn, LocalCache<T> cache) {
  ...
}

private static <T extends Entity> Collection<T> extractEntities(ResultSet rs, EntityMetadata metadata, Connection conn, LocalCache<T> cache) throws SQLException {
  ...
}

private static <T extends Entity> T extractEntity(ResultSet rs, ResultSetMetaData meta, EntityMetadata metadata, Connection conn, LocalCache<T> cache) throws SQLException {
    T data = metadata.entityInstance();
    int numberOfColumns = meta.getColumnCount();

    //check primary key column
    ColumnInfo pkColumnInfo = metadata.pkColumn();
    if(pkColumnInfo.isPkColumn){
        Object id = rs.getObject(pkColumnInfo.columnName);
        Optional<T> cached = cache.getIfExists((Comparable<?>) id);
        if(cached.isPresent()){
            return cached.get();
        }
        else{
            cache.addIfNotExists((Comparable<?>) id, data);
        }
    }

    //check mapped columns
    ...
}

Now when you query this table, you will get the 12 records back without overflowing the stack.

Moving on to the second challenge, which is when the primary key is a composite key.

create table if not exists tbl_task_v2 (
    num int not null,
    name varchar(50) not null,
    done boolean not null default false,
    task_created timestamp not null default now(),
    parent_task_num int,
    parent_task_name varchar(50),
    constraint task_pk_v2 primary key(num, name),
    constraint task_parent_fk_v2 foreign key(parent_task_num, parent_task_name) references tbl_task_v2(num, name)
);
-- some queries to try out
INSERT INTO TBL_TASK_V2 (num, name) values(0,'Task 0');
INSERT INTO TBL_TASK_V2 (num, name) values(1,'Task 1');
INSERT INTO TBL_TASK_V2 (num, name) values(1,'Task 2');
INSERT INTO TBL_TASK_V2 (num, name) values(3,'Task 3');
INSERT INTO TBL_TASK_V2 (num, name) values(2,'Task 4');
INSERT INTO TBL_TASK_V2 (num, name) values(2,'Task 5');
INSERT INTO TBL_TASK_V2 (num, name) values(3,'Task 6');
INSERT INTO TBL_TASK_V2 (num, name) values(4,'Task 7');
INSERT INTO TBL_TASK_V2 (num, name) values(3,'Task 8');
INSERT INTO TBL_TASK_V2 (num, name) values(4,'Task 9');

UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 3, PARENT_TASK_NAME = 'Task 8' WHERE NUM = 4 AND NAME = 'Task 9';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 2, PARENT_TASK_NAME = 'Task 5' WHERE NUM = 3 AND NAME = 'Task 8';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 2, PARENT_TASK_NAME = 'Task 4' WHERE NUM = 4 AND NAME = 'Task 7';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 1, PARENT_TASK_NAME = 'Task 2' WHERE NUM = 2 AND NAME = 'Task 5';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 3, PARENT_TASK_NAME = 'Task 6' WHERE NUM = 3 AND NAME = 'Task 3';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 1, PARENT_TASK_NAME = 'Task 1' WHERE NUM = 1 AND NAME = 'Task 2';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 0, PARENT_TASK_NAME = 'Task 0' WHERE NUM = 2 AND NAME = 'Task 5';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 0, PARENT_TASK_NAME = 'Task 0' WHERE NUM = 1 AND NAME = 'Task 1';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 3, PARENT_TASK_NAME = 'Task 3' WHERE NUM = 3 AND NAME = 'Task 4';
UPDATE TBL_TASK_V2 SET PARENT_TASK_NUM = 1, PARENT_TASK_NAME = 'Task 2' WHERE NUM = 3 AND NAME = 'Task 6';

Let's model this using the classes below.

//The taskId composite key
class TaskId implements Entity, Comparable<TaskId> {

    static Logger LOG = Logger.getLogger(TaskId.class.getName());

    Integer number;
    String name;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(TaskId.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            case "number":
                this.number = (Integer) value;
                break;
            case "name":
                this.name = (String) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }

    @Override
    public int compareTo(TaskId o) {
        if (this == o) return 0;
        int comparison = this.name.compareTo(o.name);
        if (comparison != 0) return comparison;
        return this.number.compareTo(o.number);
    }
}
//the taskV2 entity
class TaskV2 implements Entity {

    static Logger LOG = Logger.getLogger(TaskV2.class.getName());

    TaskId taskId;
    Boolean completed;
    LocalDate dateCreated;
    TaskV2 parentTask;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(TaskV2.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            case "taskId":
                this.taskId = (TaskId) value;
                break;
            case "completed":
                this.completed = (Boolean) value;
                break;
            case "dateCreated":
                this.dateCreated = (LocalDate) value;
                break;
            case "parentTask":
                this.parentTask = (TaskV2) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }
}

With these new Entity classes, the ColumnInfo needs a new field to capture information about the composite columns. Update the ColumnInfoBuilder accordingly as well.

//add to ColumnInfo
String[][] compositeColumns;

The EntityRegistry needs to be updated as well to capture the new metadata information

//TaskId Entity
registry.put(TaskId.class, new EntityMetadata(TaskId.class, null, List.of(
        ColumnInfo.builder().columnName("num").attributeName("number").attributeType(Integer.class).build(),
        ColumnInfo.builder().columnName("name").attributeName("name").build()
), EntityType.COMPOSITE_PK) {

    @Override
    public TaskId entityInstance() {
        return new TaskId();
    }
});

//TaskV2 Entity
registry.put(TaskV2.class, new EntityMetadata(TaskV2.class, "tbl_task_v2", List.of(
        ColumnInfo.builder().attributeName("taskId").attributeType(TaskId.class).isCompositePk(true).build(),
        ColumnInfo.builder().columnName("done").attributeName("completed").attributeType(Boolean.class).build(),
        ColumnInfo.builder().columnName("task_created").attributeName("dateCreated").attributeType(LocalDate.class).build(),
        ColumnInfo.builder().attributeName("parentTask").attributeType(TaskV2.class).isFkColumn(true)
            .compositeColumns(new String[][]{{"parent_task_num", "num"}, {"parent_task_name", "name"}}).build()
)) {

    @Override
    public TaskV2 entityInstance() {
        return new TaskV2();
    }
});

Update the EntityMetadata with a new query generator for composite column relations

public String createCompositeJoinQuery(String fkColumnTable, String[][] compositeColumns) {
    String joinOn = Stream.of(compositeColumns).map(fkColumnNames ->
            "FK_TABLE." + fkColumnNames[0] + " = PK_TABLE." + fkColumnNames[1]).collect(Collectors.joining(" and "));
    String joinWhere = Stream.of(compositeColumns).map(fkColumnNames ->
            "FK_TABLE." + fkColumnNames[0] + " = ?").collect(Collectors.joining(" and "));
    String[] query = new String[]{
            "select PK_TABLE.* from " + tableName + " PK_TABLE",
            "inner join " + fkColumnTable + " FK_TABLE on " + joinOn,
            "where " + joinWhere
    };
    return String.join(" ", query);
}
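For the tbl_task_v2 metadata above, with composite columns {parent_task_num -> num, parent_task_name -> name}, the generated query would be along these lines:

-- illustrative output of createCompositeJoinQuery("tbl_task_v2", compositeColumns)
select PK_TABLE.* from tbl_task_v2 PK_TABLE inner join tbl_task_v2 FK_TABLE on FK_TABLE.parent_task_num = PK_TABLE.num and FK_TABLE.parent_task_name = PK_TABLE.name where FK_TABLE.parent_task_num = ? and FK_TABLE.parent_task_name = ?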

Then last but definitely not least, update the select function. This will take several changes:

//check if entity has a composite pk
if (metadata.containsCompositePk()) {
    ColumnInfo compositePkColumnInfo = metadata.compositePkColumn();
    EntityMetadata compositePkEntityMetadata = EntityRegistry.registry.get(compositePkColumnInfo.attributeType);
    Object compositeId = extractEntity(rs, meta, compositePkEntityMetadata, conn, cache);
    Optional<T> cached = cache.getIfExists((Comparable<?>) compositeId);
    if (cached.isPresent()) {
        return cached.get();
    } else {
        cache.addIfNotExists((Comparable<?>) compositeId, data);
        //set attribute value in entity
        String attributeName = compositePkColumnInfo.attributeName;
        data.set(attributeName, compositeId);
    }
}

//check if entity has a non-composite pk
if (metadata.containsPkColumn()) {
    ColumnInfo pkColumnInfo = metadata.pkColumn();
    Object id = rs.getObject(pkColumnInfo.columnName, pkColumnInfo.attributeType);
    Optional<T> cached = cache.getIfExists((Comparable<?>) id);
    if (cached.isPresent()) {
        return cached.get();
    } else {
        cache.addIfNotExists((Comparable<?>) id, data);
    }
}

//check for foreign key columns (with join having either single fk column or composite fk columns)
if (metadata.containsFkColumns()) {
    for (ColumnInfo fkColumnInfo : metadata.fkColumns()) {
        String attributeName = fkColumnInfo.attributeName;
        EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
        if (fkColumnInfo.columnName != null) {
            String query = joinedEntityMetadata.createJoinQuery(metadata.tableName, fkColumnInfo.columnName);
            Object joinValue = rs.getObject(fkColumnInfo.columnName, joinedEntityMetadata.pkColumn().attributeType);
            if (joinValue != null) {
                SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
                if (fkEntityValue.result != null && fkEntityValue.result.size() > 0) {
                    data.set(attributeName, new ArrayList<>(fkEntityValue.result).get(0));
                }
            }
        } else {
            //must be a composite fk column
            String[][] compositeColumns = fkColumnInfo.compositeColumns;
            String query = joinedEntityMetadata.createCompositeJoinQuery(metadata.tableName, compositeColumns);
            Object[] params = new Object[compositeColumns.length];
            for (int i = 0; i < params.length; i++) {
                String column = compositeColumns[i][0];
                params[i] = rs.getObject(column);
            }
            SelectResult<?> compositeFkEntityValue = select(query, params, joinedEntityMetadata, conn, cache);
            if (compositeFkEntityValue.result != null && compositeFkEntityValue.result.size() > 0) {
                data.set(attributeName, new ArrayList<>(compositeFkEntityValue.result).get(0));
            }
        }
    }
}

With these updates, running the select will now produce the expected results.

Table with many-to-1 relationship

Visit Github Resources

The last section was quite lengthy, but it also covered a lot of ground. The next step will be to consider a scenario where there is a many-to-1 relation.

create table if not exists tbl_task (
  id UUID default random_uuid(),
  name varchar(50) not null,
  done boolean not null default false,
  task_created timestamp not null default now(),
  parent_task UUID,
  constraint task_pk primary key(id),
  constraint uniq_name unique (name),
  constraint task_parent_fk foreign key(parent_task) references tbl_task(id)
);

The SQL table remains the same, and since there is no unique constraint on parent_task, this can be modelled as a many-to-1 relationship using a Collection attribute in the object realm. Let's also consider that the primary key is non-composite.

public class Task implements Entity {

    static Logger LOG = Logger.getLogger(Task.class.getName());

    UUID id;
    String name;
    Boolean completed;
    LocalDate dateCreated;
    Task parentTask;
    Collection<Task> dependsOn;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(Task.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            //...same as before
            case "dependsOn":
                this.dependsOn = (Collection<Task>) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }
}

For a given Task, the dependsOn field will group together all the other Tasks whose parent_task references its id. We need to define a new attribute in ColumnInfo and also update the ColumnInfoBuilder accordingly:

Boolean isCollection;

The EntityMetadata should be able to reflect this change through adding some convenient methods:

public Boolean containsCollectionColumns() {
    return columns.stream().anyMatch(info -> info.isCollection != null && info.isCollection);
}

public List<ColumnInfo> collectionColumns() {
    return columns.stream().filter(info -> info.isCollection).collect(Collectors.toList());
}

The metadata for the Task needs to be updated to reflect this new scenario

//same as before
ColumnInfo.builder().columnName("parent_task").attributeName("dependsOn").attributeType(Task.class).isCollection(true).build()

Now the only other change remaining is to update the select function to handle this scenario. The update mirrors how we handled foreign keys, except when setting the result back in the parent entity, so I chose to use a separate code block rather than muddle the two concerns.

//check for Collection columns
if(metadata.containsCollectionColumns()){
    for (ColumnInfo fkColumnInfo : metadata.collectionColumns()) {
        String attributeName = fkColumnInfo.attributeName;
        EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
        if (fkColumnInfo.columnName != null) {
            String query = joinedEntityMetadata.createJoinQuery(metadata.tableName, fkColumnInfo.columnName);
            Object joinValue = rs.getObject(fkColumnInfo.columnName, joinedEntityMetadata.pkColumn().attributeType);
            if (joinValue != null) {
                SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
                if (fkEntityValue.result != null) {
                    data.set(attributeName, fkEntityValue.result);
                }
            }
        }
    }
}

With that change, the select function continues to return the expected results.
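A quick usage sketch (hypothetical, assuming the Task registry entry above and direct field access within the same package):

SelectResult<Task> tasks = select("select * from tbl_task", new Object[]{}, EntityRegistry.registry.get(Task.class));
for (Task task : tasks.result) {
    int count = task.dependsOn == null ? 0 : task.dependsOn.size();
    System.out.println(task.name + " -> dependsOn: " + count);
}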

Moving another step forward, as with the previous 1-1 scenario, we need to consider a many-to-1 scenario where the primary key is composite.

create table if not exists tbl_task_v2(
    num int not null,
    name varchar(50) not null,
    done boolean not null default false,
    task_created timestamp not null default now(),
    parent_task_num int,
    parent_task_name varchar(50),
    constraint task_pk_v2 primary key(num, name),
    constraint task_parent_fk_v2 foreign key(parent_task_num, parent_task_name) references tbl_task_v2(num, name)
);

Again, the database table does not change, and since there is no unique constraint on parent_task, this can be modelled using a Collection attribute in the object realm.

public class TaskV2 implements Entity {

    static Logger LOG = Logger.getLogger(TaskV2.class.getName());

    TaskId taskId;
    Boolean completed;
    LocalDate dateCreated;
    TaskV2 parentTask;
    Collection<TaskV2> dependsOn;

    @Override
    public EntityMetadata metadata() {
        return EntityRegistry.registry.get(TaskV2.class);
    }

    @Override
    public <T> void set(String attribute, T value) {
        switch (attribute) {
            //... same as before
            case "dependsOn":
                this.dependsOn = (Collection<TaskV2>) value;
                break;
            default:
                LOG.warning("what the heck are you doing?");
        }
    }
}

The TaskV2 entity in EntityRegistry needs an update as well

ColumnInfo.builder().attributeName("dependsOn").attributeType(TaskV2.class).isCollection(true)                        .compositeColumns(new String[][]{{"parent_task_num", "num"}, {"parent_task_name", "name"}}).build()

Finally, we need to update the select function to handle this scenario as well. In the block that checks for Collection columns, we need to add an else branch to handle composite primary keys.

//check for Collection columns
if(metadata.containsCollectionColumns()){
    for (ColumnInfo fkColumnInfo : metadata.collectionColumns()) {
        //...same as before
        else{
            //must be a composite fk column
            String[][] compositeColumns = fkColumnInfo.compositeColumns;
            String query = joinedEntityMetadata.createCompositeJoinQuery(metadata.tableName, compositeColumns);
            Object[] params = new Object[compositeColumns.length];
            for (int i = 0; i < params.length; i++) {
                String column = compositeColumns[i][0];
                params[i] = rs.getObject(column);
            }
            SelectResult<?> compositeFkEntityValue = select(query, params, joinedEntityMetadata, conn, cache);
            if (compositeFkEntityValue.result != null) {
                data.set(attributeName, compositeFkEntityValue.result);
            }
        }
    }
}
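The createCompositeJoinQuery generator called above is not reproduced in this section. A minimal sketch of it, assuming it lives on EntityMetadata next to the other generators and pairs each {fk column, pk column} entry into the join and where clauses (the actual implementation may differ), could look like this:

//sketch: self-join query for a composite foreign key
//compositeColumns entries are {fk column in this table, referenced pk column}
public String createCompositeJoinQuery(String fkColumnTable, String[][] compositeColumns) {
    String joinOn = Arrays.stream(compositeColumns)
            .map(pair -> "FK_TABLE." + pair[0] + " = PK_TABLE." + pair[1])
            .collect(Collectors.joining(" and "));
    String joinWhere = Arrays.stream(compositeColumns)
            .map(pair -> "FK_TABLE." + pair[0] + " = ?")
            .collect(Collectors.joining(" and "));
    String[] query = new String[]{
            "select PK_TABLE.* from " + tableName + " PK_TABLE",
            "inner join " + fkColumnTable + " FK_TABLE on " + joinOn,
            "where " + joinWhere
    };
    return String.join(" ", query);
}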

And now once again, executing the select function returns the expected results.

Mapping relationships using an adjacent table#

So far, in my treatment of 1-to-1 and many-to-1 relationships, I have assumed the foreign key table and the referenced table are the same table. There are other cases, however, where the foreign key table references a different table, and this too can be used to model both 1-to-1 and many-to-1 relationships. In the object realm, a 1-to-1 relationship is typically indistinguishable from a many-to-1 relationship apart from the latter being a collection, but in the relational model, the difference is established using a unique constraint on the foreign key column.

non-composite pk#

Let's consider a User and a Task, having a 1-to-1 relationship

create table if not exists tbl_user (
    id UUID default random_uuid(),
    email_address varchar(100) not null,
    nickname varchar(50),
    date_joined timestamp not null default now(),
    access_level varchar default 'VISITOR',
    addr_city varchar(50),
    addr_state varchar(30),
    addr_postal_code varchar(10),
    constraint user_pk primary key(id)
);

create table if not exists tbl_task (
    id UUID default random_uuid(),
    name varchar(50) not null,
    done boolean not null default false,
    task_created timestamp not null default now(),
    parent_task UUID,
    task_assignee UUID,
    constraint task_pk primary key(id),
    constraint uniq_name unique (name),
    constraint task_parent_fk foreign key(parent_task) references tbl_task(id),
    constraint task_assignee_fk foreign key(task_assignee) references tbl_user(id)
);

The respective class representations will look like this:

//the Task entity
public class Task implements Entity {
    UUID id;
    String name;
    Boolean completed;
    LocalDate dateCreated;
    Task parentTask;
    Collection<Task> dependsOn;

    //...the rest is similar to previous examples

//the User entity
public class User implements Entity {
    UUID id;
    String emailAddress;
    String nickName;
    AccessLevel accessLevel;
    Address address;
    LocalDate dateJoined;
    Collection<Task> assignments;

    //...the rest is similar to earlier examples

To model assignments being held in a different table, we need to introduce another attribute to the ColumnInfo class, along with the associated change on the ColumnInfoBuilder class

String inverseFkColumn;

This new attribute needs to be added to the User entity's metadata

ColumnInfo.builder().columnName("task_assignee").attributeName("assignments").attributeType(Task.class).isCollection(true)    .inverseFkColumn("id").build()

With these changes made, the last place to change is in the select function. The update goes right after the condition check on fkColumnInfo.columnName

if (fkColumnInfo.columnName != null) {
    if(fkColumnInfo.inverseFkColumn != null){
        String query = joinedEntityMetadata.createJoinQueryInverse(metadata.tableName, fkColumnInfo.columnName);
        Object joinValue = rs.getObject(fkColumnInfo.inverseFkColumn, joinedEntityMetadata.pkColumn().attributeType);
        if (joinValue != null) {
            SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
            if (fkEntityValue.result != null) {
                data.set(attributeName, fkEntityValue.result);
            }
        }
    }
    else {
        String query = joinedEntityMetadata.createJoinQuery(metadata.tableName, fkColumnInfo.columnName);
        Object joinValue = rs.getObject(fkColumnInfo.columnName, joinedEntityMetadata.pkColumn().attributeType);
        if (joinValue != null) {
            SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
            if (fkEntityValue.result != null) {
                data.set(attributeName, fkEntityValue.result);
            }
        }
    }
} else {
    //....same as before

This same condition needs to be applied in the case where the relationship is 1-to-1

if (metadata.containsFkColumns()) {
    for (ColumnInfo fkColumnInfo : metadata.fkColumns()) {
        String attributeName = fkColumnInfo.attributeName;
        EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
        if (fkColumnInfo.columnName != null) {
            if(fkColumnInfo.inverseFkColumn != null){
                String query = joinedEntityMetadata.createJoinQueryInverse(fkColumnInfo.inverseFkColumn, fkColumnInfo.columnName);
                Object joinValue = rs.getObject(fkColumnInfo.inverseFkColumn, joinedEntityMetadata.pkColumn().attributeType);
                if (joinValue != null) {
                    SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
                    if (fkEntityValue.result != null && fkEntityValue.result.size() > 0) {
                        data.set(attributeName, new ArrayList<>(fkEntityValue.result).get(0));
                    }
                }
            }
    //...rest is the same

The query generation function will also require some tweaking to reflect the inverse semantics

public String createJoinQueryInverse(String fkColumnTable, String fkColumnName) {
    ColumnInfo pkColumnInfo = pkColumn();
    String[] query = new String[]{
            "select PK_TABLE.* from " + tableName + " PK_TABLE",
            "inner join " + fkColumnTable + " FK_TABLE on FK_TABLE." + pkColumnInfo.columnName + " = PK_TABLE." + fkColumnName,
            "where PK_TABLE." + fkColumnName + " = ?"
    };
    return String.join(" ", query);
}
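To make the inverse semantics concrete, here is a quick illustration of the query this generator produces for the tbl_user/tbl_task pair above (taskMetadata is a hypothetical reference to the Task entry in the EntityRegistry):

//Task's metadata: table tbl_task, pk column id; the foreign key column task_assignee lives in tbl_task
String query = taskMetadata.createJoinQueryInverse("tbl_user", "task_assignee");
//generates:
//select PK_TABLE.* from tbl_task PK_TABLE
//inner join tbl_user FK_TABLE on FK_TABLE.id = PK_TABLE.task_assignee
//where PK_TABLE.task_assignee = ?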

And with that, the Task entity (or entities) mapped by the column task_assignee in the tbl_task table gets loaded into the User entity.

composite pk#

This same pattern needs to be used in the case where the foreign key is a composite key. The example below illustrates such a scenario

create table if not exists tbl_user_v2 (
  email_address varchar(100) not null,
  username varchar(50) not null,
  nickname varchar(50),
  date_joined timestamp not null default now(),
  access_level varchar default 'VISITOR',
  addr_city varchar(50),
  addr_state varchar(30),
  addr_postal_code varchar(10),
  constraint user_pk_v2 primary key(email_address, username)
);

create table if not exists tbl_task_v2 (
  num int not null,
  name varchar(50) not null,
  done boolean not null default false,
  task_created timestamp not null default now(),
  parent_task_num int,
  parent_task_name varchar(50),
  task_assignee_email varchar(100),
  task_assignee_uname varchar(50),
  constraint task_pk_v2 primary key(num, name),
  constraint task_parent_fk_v2 foreign key(parent_task_num, parent_task_name) references tbl_task_v2(num, name),
  constraint task_assignee_fk_v2 foreign key(task_assignee_email, task_assignee_uname) references tbl_user_v2(email_address, username)
);

The respective class representations will look like this:

class TaskV2 implements Entity {
    TaskId taskId;
    Boolean completed;
    LocalDate dateCreated;
    TaskV2 parentTask;
    Collection<TaskV2> dependsOn;

    //...the rest is the same as before

class TaskId implements Entity, Comparable<TaskId> {
    Integer number;
    String name;

    //...the rest is the same as before

class UserV2 implements Entity {
    UserId userId;
    String nickName;
    AccessLevel accessLevel;
    Address address;
    LocalDate dateJoined;
    Collection<TaskV2> assignments;

    //...the rest is the same as before

class UserId implements Entity, Comparable<UserId> {
    String emailAddress;
    String userName;

    //...the rest is the same as before

To model assignments being held in a different table, the inverseFkColumn attribute needs to be added to the UserV2 entity's metadata

ColumnInfo.builder().attributeName("assignments").attributeType(Task.class).isCollection(true)    .compositeColumns(new String[][]{{"task_assignee_email", "email_address"}, {"task_assignee_uname", "username"}})    .inverseFkColumn("id").build()
warning: TODO: fill in some details about composite key mapping

Mapping relationships using a junction table#

The scenario we shall consider next is mapping a relationship, be it 1-to-1 or many-to-1, using a join table. What this means is that neither of the entity tables owns the foreign key. The foreign key is owned by an intermediary table which references both entity tables, hence it is aptly named the join table.

create table if not exists tbl_user_v3 (
    id UUID default random_uuid(),
    email_address varchar(100) not null,
    nickname varchar(50),
    date_joined timestamp not null default now(),
    access_level varchar default 'VISITOR',
    addr_city varchar(50),
    addr_state varchar(30),
    addr_postal_code varchar(10),
    constraint user_pk_v3 primary key(id)
);

create table if not exists tbl_task_v3 (
    id UUID default random_uuid(),
    name varchar(50) not null,
    done boolean not null default false,
    task_created timestamp not null default now(),
    parent_task UUID,
    constraint task_pk_v3 primary key(id),
    constraint uniq_name_v3 unique (name),
    constraint task_parent_fk_v3 foreign key(parent_task) references tbl_task(id)
);

create table if not exists tbl_user_task (
    task_id UUID not null,
    task_assignee UUID not null,
    constraint user_task_id_fk foreign key(task_id) references tbl_task_v3(id),
    constraint user_task_assignee_fk foreign key(task_assignee) references tbl_user_v3(id)
);

This new scenario will require additional metadata to describe the join table and the associated columns. As a starting point, let's introduce an additional type of metadata:

class JoinTable {

    String tableName;
    String[] onColumns;
    String[] whereColumns;
    String inverseColumn;

    public JoinTable(String tableName, String[] onColumns, String[] whereColumns, String inverseColumn) {
        this.tableName = tableName;
        this.onColumns = onColumns;
        this.whereColumns = whereColumns;
        this.inverseColumn = inverseColumn;
    }

    public static JoinTableBuilder builder() {
        return new JoinTableBuilder();
    }
}

//The JoinTable builder for convenience
class JoinTableBuilder {

    private String tableName;
    private String[] onColumns = {};
    private String[] whereColumns = {};
    private String inverseColumn;

    public JoinTableBuilder tableName(String tableName){
        this.tableName = tableName;
        return this;
    }

    public JoinTableBuilder onColumn(String onColumn){
        String[] newArray = new String[onColumns.length + 1];
        System.arraycopy(this.onColumns, 0, newArray, 0, onColumns.length);
        this.onColumns = newArray;
        this.onColumns[onColumns.length - 1] = onColumn;
        return this;
    }

    public JoinTableBuilder onColumns(String[] onColumns){
        this.onColumns = onColumns;
        return this;
    }

    public JoinTableBuilder whereColumns(String[] whereColumns){
        this.whereColumns = whereColumns;
        return this;
    }

    public JoinTableBuilder whereColumn(String whereColumn){
        String[] newArray = new String[whereColumns.length + 1];
        System.arraycopy(this.whereColumns, 0, newArray, 0, whereColumns.length);
        this.whereColumns = newArray;
        this.whereColumns[whereColumns.length - 1] = whereColumn;
        return this;
    }

    public JoinTableBuilder inverseColumn(String inverseColumn) {
        this.inverseColumn = inverseColumn;
        return this;
    }

    public JoinTable build() {
        return new JoinTable(tableName, onColumns, whereColumns, inverseColumn);
    }
}

Add this to the ColumnInfo class, and update the ColumnInfoBuilder accordingly

JoinTable joinTable;

The User entity will now use this new metadata for its assignments attribute

ColumnInfo.builder().attributeName("assignments").attributeType(Task.class).isCollection(true)    .joinTable(JoinTable.builder().tableName("tbl_task_user").inverseColumn("id").onColumn("task_id").whereColumn("task_assignee").build()).build()

With this plumbing in place, the next place to make an adjustment is back in the select function. The select function has grown really big by now, but hang in there; we'll refactor it soon. For now, let's tuck in the change necessary to handle this scenario

//check for Collection columns
if (metadata.containsCollectionColumns()) {
    for (ColumnInfo fkColumnInfo : metadata.collectionColumns()) {
        String attributeName = fkColumnInfo.attributeName;
        EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
        if(fkColumnInfo.joinTable != null){
            JoinTable joinTable = fkColumnInfo.joinTable;
            String query = joinedEntityMetadata.createJoinTableJoinQuery(joinTable);
            Object joinValue = rs.getObject(joinTable.inverseColumn, joinedEntityMetadata.pkColumn().attributeType);
            if (joinValue != null) {
                SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
                if (fkEntityValue.result != null) {
                    data.set(attributeName, fkEntityValue.result);
                }
            }
        }
        else if (fkColumnInfo.columnName != null) {
            //...the rest is as before

And this requires a new query generator function like this:

public String createJoinTableJoinQuery(JoinTable joinTable) {
    ColumnInfo pkColumnInfo = pkColumn();
    String[] query = new String[]{
        "select PK_TABLE.* from " + tableName + " PK_TABLE",
        "inner join " + joinTable.tableName + " FK_TABLE on FK_TABLE." + joinTable.onColumns[0] + " = PK_TABLE." + pkColumnInfo.columnName,
        "where FK_TABLE." + joinTable.whereColumns[0] + " = ?"
    };
    return String.join(" ", query);
}
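As with the inverse case earlier, a quick illustration helps. For the assignments mapping above (Task's metadata pointing at tbl_task, and the join table declared as tbl_user_task), the generated query would come out roughly as follows (taskMetadata is again a hypothetical reference to the Task entry in the EntityRegistry):

//joinTable: tableName tbl_user_task, onColumns {"task_id"}, whereColumns {"task_assignee"}, inverseColumn "id"
String query = taskMetadata.createJoinTableJoinQuery(joinTable);
//generates:
//select PK_TABLE.* from tbl_task PK_TABLE
//inner join tbl_user_task FK_TABLE on FK_TABLE.task_id = PK_TABLE.id
//where FK_TABLE.task_assignee = ?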

And considering the case where the primary key is a composite key, the relational database might be represented as shown below.

create table if not exists tbl_user_v4 (
  email_address varchar(100) not null,
  username varchar(50) not null,
  nickname varchar(50),
  date_joined timestamp not null default now(),
  access_level varchar default 'VISITOR',
  addr_city varchar(50),
  addr_state varchar(30),
  addr_postal_code varchar(10),
  constraint customer_pk_v4 primary key(email_address, username)
);

create table if not exists tbl_task_v4 (
  num int not null,
  name varchar(50) not null,
  done boolean not null default false,
  task_created timestamp not null default now(),
  parent_task_num int,
  parent_task_name varchar(50),
  constraint task_pk_v4 primary key(num, name),
  constraint task_parent_fk_v4 foreign key(parent_task_num, parent_task_name) references tbl_task_v2(num, name)
);

create table if not exists tbl_user_task_v4 (
    task_num int not null,
    task_name varchar(50) not null,
    task_assignee_email varchar(100) not null,
    task_assignee_uname varchar(50) not null,
    constraint task_fk_v4 foreign key(task_num, task_name) references tbl_task_v4(num, name),
    constraint task_assignee_fk_v4 foreign key(task_assignee_email, task_assignee_uname) references tbl_user_v4(email_address, username)
);

The object equivalents will remain the same as they were. However, there will be a change to the EntityMetadata.

//make slight change to JoinTable metadata
String[] inverseColumns;
String[] compositeColumns;

//Update the JoinTableBuilder to reflect this change
public JoinTableBuilder inverseColumn(String inverseColumn) {
    String[] newArray = new String[inverseColumns.length + 1];
    System.arraycopy(this.inverseColumns, 0, newArray, 0, inverseColumns.length);
    this.inverseColumns = newArray;
    this.inverseColumns[inverseColumns.length - 1] = inverseColumn;
    return this;
}

public JoinTableBuilder inverseColumns(String... inverseColumns){
    this.inverseColumns = inverseColumns;
    return this;
}

public JoinTableBuilder compositeColumns(String... compositeColumns) {
    this.compositeColumns = compositeColumns;
    return this;
}

public JoinTable build() {
    return new JoinTable(tableName, compositeColumns, inverseColumns, onColumns, whereColumns);
}

//Update the EntityRegistry to reflect this change also
registry.put(UserV4.class, new EntityMetadata(UserV4.class, "tbl_user_v4", List.of(
        ColumnInfo.builder().attributeName("userId").attributeType(UserId.class).isCompositePk(true).build(),
        ColumnInfo.builder().columnName("nickname").attributeName("nickName").build(),
        ColumnInfo.builder().columnName("access_level").attributeName("accessLevel").attributeType(AccessLevel.class).isEnum(true).build(),
        ColumnInfo.builder().attributeName("address").attributeType(Address.class).isEmbedded(true).build(),
        ColumnInfo.builder().columnName("date_joined").attributeName("dateJoined").attributeType(LocalDate.class).build(),
        ColumnInfo.builder().attributeName("assignments").attributeType(TaskV4.class).isCollection(true)
                .joinTable(JoinTable.builder().tableName("tbl_user_task_v4").compositeColumns("email_address", "username").inverseColumns("num", "name")
                        .onColumns("task_num", "task_name").whereColumns("task_assignee_email", "task_assignee_uname")
                        .build()).build())) {

    @Override
    public UserV4 entityInstance() {
        return new UserV4();
    }
});

Update the query generator function

public String createJoinTableCompositeJoinQuery(JoinTable joinTable) {
    String joinOn = IntStream.range(0, joinTable.onColumns.length)
            .mapToObj(i -> new String[]{joinTable.onColumns[i], joinTable.inverseColumns[i]})
            .map(joinColumns -> "FK_TABLE." + joinColumns[0] + " = PK_TABLE." + joinColumns[1])
            .collect(Collectors.joining(" and "));
    String joinWhere = Arrays.stream(joinTable.whereColumns, 0, joinTable.whereColumns.length)
            .map(whereColumn -> "FK_TABLE." + whereColumn + " = ?")
            .collect(Collectors.joining(" and "));

    String[] query = new String[]{
            "select PK_TABLE.* from " + tableName + " PK_TABLE",
            "inner join " + joinTable.tableName + " FK_TABLE on " + joinOn,
            "where " + joinWhere
    };
    return String.join(" ", query);
}

The last change for this scenario will once again be in the select function.

if (metadata.containsCollectionColumns()) {
    for (ColumnInfo fkColumnInfo : metadata.collectionColumns()) {
        String attributeName = fkColumnInfo.attributeName;
        EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
        if(fkColumnInfo.joinTable != null){
            JoinTable joinTable = fkColumnInfo.joinTable;
            if(joinTable.compositeColumns != null) {
                String query = joinedEntityMetadata.createJoinTableCompositeJoinQuery(joinTable);
                Object[] params = new Object[joinTable.compositeColumns.length];
                for (int i = 0; i < params.length; i++) {
                    String column = joinTable.compositeColumns[i];
                    params[i] = rs.getObject(column);
                }
                SelectResult<?> compositeFkEntityValue = select(query, params, joinedEntityMetadata, conn, cache);
                if (compositeFkEntityValue.result != null) {
                    data.set(attributeName, compositeFkEntityValue.result);
                }
            }
            else{
                String query = joinedEntityMetadata.createJoinTableJoinQuery(joinTable);
                Object joinValue = rs.getObject(joinTable.inverseColumns[0], joinedEntityMetadata.pkColumn().attributeType);
                if (joinValue != null) {
                    SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
                    if (fkEntityValue.result != null) {
                        data.set(attributeName, fkEntityValue.result);
                    }
                }
            }
        }
        else if (fkColumnInfo.columnName != null) {
            //...the rest is as before

Refactoring the extractEntity function#

Visit Github Resources

Up to this point, we have piled changes onto the extractEntity function by using if statements to make decisions, an approach that is only good when the complexity is small. In our case, the complexity of the function is now almost overwhelming, and it would be impossible to continue down this path without running into a brick wall. It's now time to break down that complexity into smaller, single-purpose functions.

Let's first review the constraints that the extractEntity function is dealing with

  1. Does the table have a primary key?

    • single column?
    • composite column?
  2. Does the Entity class represent any group of columns using an Embeddable entity? This means that a group of columns in a table row are folded together into an Embedded entity

  3. Does the table have a foreign key relationship with other tables?

  4. How is the foreign key relationship defined?

    • self join, where the fk and pk are in the same table
    • cross join, where fk is in the source table and pk is in the target table
    • join table, where the fk is in neither the source nor the target table and is instead in a separate join table

With these constraints in mind, the overall decision tree can be reduced to what is shown below. Although the complexity is arguably still high, the previous level of complexity has nevertheless been greatly diminished. This means that there's still room for improvement, and this can be revisited later on.

private static <T extends Entity> T extractEntity(ResultSet rs, ResultSetMetaData rsmeta, EntityMetadata metadata, Connection conn, LocalCache<T> cache) throws SQLException {
    T data = metadata.entityInstance();

    if (metadata.containsCompositePk()) {
        //if value is cached, return it, else keep going
    }

    if (metadata.containsPkColumn()) {
        //if value is cached, return it, else keep going
    }

    //check mapped columns - columns that exist in rs
    extractMappedEntityColumns(rs, rsmeta, metadata, data);

    //check embedded columns
    if (metadata.containsEmbedded()) {
        extractEmbeddedEntityColumns(rs, rsmeta, metadata, conn, cache, data);
    }

    if (metadata.containsFkColumns()) {
        for (ColumnInfo fkColumnInfo : metadata.fkColumns()) {
            String attributeName = fkColumnInfo.attributeName;
            EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
            if (fkColumnInfo.columnName != null) {
                if (fkColumnInfo.inverseFkColumn != null) {
                    extractFkColumnWhenIsInverse(rs, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                } else {
                    extractFkColumnWhenIsNotInverse(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                }
            } else {
                //must be a composite fk column since 'columnName' is null
                if (fkColumnInfo.isInverseComposite) {
                    extractCompositeFkWhenIsInverse(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                } else {
                    extractCompositeFkWhenIsNotInverse(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                }
            }
        }
    }

    if (metadata.containsCollectionColumns()) {
        for (ColumnInfo fkColumnInfo : metadata.collectionColumns()) {
            String attributeName = fkColumnInfo.attributeName;
            EntityMetadata joinedEntityMetadata = EntityRegistry.registry.get(fkColumnInfo.attributeType);
            if (fkColumnInfo.joinTable != null) {
                JoinTable joinTable = fkColumnInfo.joinTable;
                if (joinTable.compositeColumns != null) {
                    extractCollectionUsingJoinTableHavingCompositeFkColumn(rs, conn, cache, data, attributeName, joinedEntityMetadata, joinTable);
                } else {
                    extractCollectionUsingJoinTableHavingFkColumn(rs, conn, cache, data, attributeName, joinedEntityMetadata, joinTable);
                }
            } else if (fkColumnInfo.columnName != null) {
                if (fkColumnInfo.inverseFkColumn != null) {
                    extractCollectionHavingInverseFkColumn(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                } else {
                    extractCollectionHavingFkColumn(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                }
            } else {
                //must be a composite fk column
                if (fkColumnInfo.isInverseComposite) {
                    extractCollectionHavingInverseCompositeFkColumn(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                } else {
                    extractCollectionHavingCompositeFkColumn(rs, metadata, conn, cache, data, fkColumnInfo, attributeName, joinedEntityMetadata);
                }
            }
        }
    }
    return data;
}
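To give a flavour of what the extracted helpers look like, here is one of them; it is essentially the collection-handling block from earlier moved into its own function (a sketch, the signatures in the repository may differ slightly):

//sketch: collection attribute backed by a plain (non-inverse, non-composite) fk column
private static <T extends Entity> void extractCollectionHavingFkColumn(ResultSet rs, EntityMetadata metadata, Connection conn, LocalCache<T> cache,
                                                                       T data, ColumnInfo fkColumnInfo, String attributeName, EntityMetadata joinedEntityMetadata) throws SQLException {
    String query = joinedEntityMetadata.createJoinQuery(metadata.tableName, fkColumnInfo.columnName);
    Object joinValue = rs.getObject(fkColumnInfo.columnName, joinedEntityMetadata.pkColumn().attributeType);
    if (joinValue != null) {
        SelectResult<?> fkEntityValue = select(query, new Object[]{joinValue}, joinedEntityMetadata, conn, cache);
        if (fkEntityValue.result != null) {
            data.set(attributeName, fkEntityValue.result);
        }
    }
}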

One subtle problem with the current implementation is that there is no mechanism to limit how much data a generated query can fetch. Although a developer can apply limits to the parent query, SQL is at times generated on the fly around relationships, outside of the developer's control, and there is no built-in mechanism to apply 'limits' to that generated SQL. This is a feature that would definitely be nice to provide while we are refactoring the core extractEntity function.

I suppose a fair place to start would be to add a new metadata definition

class OffsetLimit {

    int offset;
    int limit;
    String[] attributes;

    public OffsetLimit(int offset, int limit, String... attributes) {
        this.offset = offset;
        this.limit = limit;
        this.attributes = attributes;
    }

    public static OffsetLimitBuilder builder() {
        return new OffsetLimitBuilder();
    }
}

class OffsetLimits {

    List<OffsetLimit> offsetLimits;

    public OffsetLimits(List<OffsetLimit> offsetLimits) {
        this.offsetLimits = offsetLimits;
    }

    public static OffsetLimitsBuilder builder() {
        return new OffsetLimitsBuilder();
    }
}

Let's also add the associated builder for this new metadata class

class OffsetLimitBuilder {

    private int offset = 0;
    private int limit = 20;
    private String[] attributes = {};

    public OffsetLimitBuilder offset(int offset) {
        this.offset = offset;
        return this;
    }

    public OffsetLimitBuilder limit(int limit) {
        this.limit = limit;
        return this;
    }

    public OffsetLimitBuilder attributes(String[] attributes) {
        this.attributes = attributes;
        return this;
    }

    public OffsetLimitBuilder attribute(String attribute) {
        String[] newArray = new String[attributes.length + 1];
        System.arraycopy(this.attributes, 0, newArray, 0, attributes.length);
        this.attributes = newArray;
        this.attributes[attributes.length - 1] = attribute;
        return this;
    }

    public OffsetLimit build() {
        return new OffsetLimit(offset, limit, attributes);
    }
}

class OffsetLimitsBuilder {

    private final List<OffsetLimit> offsetLimits = new ArrayList<>();

    public OffsetLimitsBuilder offsetLimit(Function<OffsetLimitBuilder, OffsetLimit> builder) {
        offsetLimits.add(builder.apply(new OffsetLimitBuilder()));
        return this;
    }

    public OffsetLimitsBuilder offsetLimit(int offset, int limit, String... attributes) {
        this.offsetLimits.add(OffsetLimit.builder().offset(offset).limit(limit).attributes(attributes).build());
        return this;
    }

    public OffsetLimits build() {
        return new OffsetLimits(offsetLimits);
    }
}

However, since limit and offset are not fixed values, this new metadata does not belong to the ColumnInfo class. Instead, it can be passed down as an additional argument when starting a select operation, and it can then be used by the query generation functions to apply these limit and offset constraints.

public static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata) {
    return select(query, parameters, metadata, OffsetLimits.builder().build());
}

public static <T extends Entity> SelectResult<T> select(String query, Object[] parameters, EntityMetadata metadata, OffsetLimits offsetLimits) {
    //update the rest of the affected method signatures

After making adjustments to the respective functions affected by the above change, the limits information should now be available to the query generation functions.

This new metadata is only relevant when associated with an attribute in the Entity class whose type is a Collection, since that is where the additional query string gets generated. So to provide an association between a mapped column and the OffsetLimits defined, we need to add a limitColumns attribute to both the ColumnInfo and JoinTable classes, and update their corresponding builders accordingly

this.limitColumns = limitColumns;

The EntityMetadata mapping needs to be updated as well. This update should only be necessary for fkColumns and joinColumns

ColumnInfo.builder().attributeName("assignments").attributeType(TaskV4.class).isCollection(true)    .joinTable(JoinTable.builder().tableName("tbl_user_task_v4").compositeColumns("email_address", "username").inverseColumns("num", "name")        .onColumns("task_num", "task_name").whereColumns("task_assignee_email", "task_assignee_uname").limitColumns("num", "name")        .build()).build()

The query generation function will look as shown below

private String createQueryLimit(OffsetLimits limits, String... fkColumns) {
    return limits.offsetLimits.stream()
        .filter(limit -> Arrays.stream(limit.attributes).allMatch(column -> List.of(fkColumns).contains(column)))
        .findFirst()
        .map(limit -> String.format(" offset %d limit %d", limit.offset, limit.limit))
        .orElse("");
}
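To show where this plugs in, here is a sketch of how a generator might append the clause; the two-argument overload is illustrative and assumes the OffsetLimits value is threaded down to the generator and that JoinTable keeps its limitColumns as a String[]:

//sketch: append the optional offset/limit clause to the composite join-table query
public String createJoinTableCompositeJoinQuery(JoinTable joinTable, OffsetLimits limits) {
    String baseQuery = createJoinTableCompositeJoinQuery(joinTable);       //the generator shown earlier
    String limitClause = createQueryLimit(limits, joinTable.limitColumns); //empty string when no OffsetLimit matches
    return baseQuery + limitClause;
}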

As an illustration of usage, see example below

//you can skip using OffsetLimits
SelectResult<TaskV4> tasks = select("select * from tbl_task_v4", new Object[]{}, EntityRegistry.registry.get(TaskV4.class));
if (tasks.error != null) {
    System.out.println(tasks.error);
} else {
    for (TaskV4 entity : tasks.result) {
        System.out.println(entity);
    }
}

//you can make use of OffsetLimits to narrow down the size of the Tasks collection in the User's assignments attribute
OffsetLimits limits = OffsetLimits.builder().offsetLimit(0, 3, "num", "name").build();
SelectResult<UserV4> users = select("select * from tbl_user_v4", new Object[]{}, EntityRegistry.registry.get(UserV4.class), limits);
if (users.error != null) {
    System.out.println(users.error);
} else {
    for (UserV4 entity : users.result) {
        System.out.println(entity);
    }
}

Validating the EntityMetadata mapping#

Visit Github Resources

Much of the decision-making in the extractEntity function is based on the state of the EntityRegistry for any given model. If the entity metadata is inaccurate, the outcome of the function will be equally inaccurate. As much as a developer may pay close attention to creating the metadata correctly, it takes almost no effort to mess it up. A sound strategy would be to provide a means to validate that the metadata is sound. Another sound strategy would be to provide a means to generate the EntityMetadata automatically from the available information. In this section, we'll focus on the first strategy, and in the subsequent section, we'll switch gears and attempt the second.

JDBC Metadata#

The basic mechanism of extracting information from the database schema is made trivial by the JDBC framework. Let's begin by defining some classes for the schema metadata

//Table info
class TableInfo {

    final String tableName;
    final Set<TableColumn> tableColumns;
    final Set<TablePkColumn> tablePkColumns;
    final Set<TableFkColumn> tableFkColumns;

//Column info
class TableColumn {

    final String columnName;
    final int ordinal;

//Pk Column info
class TablePkColumn {

    final String columnName;
    final String pkName;
    final int keySeq;

//Fk Column info
class TableFkColumn {

    final String pkTableName;
    final String pkColumnName;
    final String pkName;
    final String fkTableName;
    final String fkColumnName;
    final String fkName;
    final int keySeq;

//Tables summary
class JdbcTablesInfo {

    final Set<TableInfo> tablesInfo = new HashSet<>();

With these definitions in place, it's now easy to walk through the database metadata and collect the pieces of information that need to be captured for the validation step

class JdbcMetadata {

    private static final Logger logger = LogManager.getLogger(JdbcMetadata.class);
    private static final String connectionString = "jdbc:h2:./data/sample-8.db";

    public static void main(String[] args) {
        JdbcTablesInfo meta = extractTableInfo();
        System.out.println(meta);
        //Do some validations here
        validateTableCount(meta);
    }

    public static JdbcTablesInfo extractTableInfo() {
        JdbcTablesInfo meta = new JdbcTablesInfo();
        try (Connection conn = DriverManager.getConnection(connectionString)) {
            DatabaseMetaData databaseMetaData = conn.getMetaData();
            try (ResultSet resultSet = databaseMetaData.getTables(null, null, null, new String[]{"TABLE"})) {
                while (resultSet.next()) {
                    TableInfoBuilder builder = TableInfo.builder();
                    String tableName = resultSet.getString("TABLE_NAME");
                    builder.tableName(tableName);
                    extractColumnInfo(tableName, databaseMetaData, builder);
                    extractPkColumnInfo(tableName, databaseMetaData, builder);
                    extractFkColumnInfo(tableName, databaseMetaData, builder);
                    meta.addTableInfo(builder.build());
                }
            }
            return meta;
        } catch (SQLException e) {
            logger.error(e.getMessage());
            throw new RuntimeException(e);
        }
    }

    private static void extractFkColumnInfo(String tableName, DatabaseMetaData databaseMetaData, TableInfoBuilder builder) {
        try (ResultSet columns = databaseMetaData.getImportedKeys(null, null, tableName)) {
            while (columns.next()) {
                String pkTableName = columns.getString("PKTABLE_NAME");
                String pkColumnName = columns.getString("PKCOLUMN_NAME");
                String pkName = columns.getString("PK_NAME");
                String fkTableName = columns.getString("FKTABLE_NAME");
                String fkColumnName = columns.getString("FKCOLUMN_NAME");
                String fkName = columns.getString("FK_NAME");
                String keySeq = columns.getString("KEY_SEQ");
                builder.tableFkColumn(pkTableName, pkColumnName, pkName, fkTableName, fkColumnName, fkName, Integer.parseInt(keySeq));
            }
        } catch (SQLException e) {
            logger.error(e.getMessage());
        }
    }

    private static void extractPkColumnInfo(String tableName, DatabaseMetaData databaseMetaData, TableInfoBuilder builder) {
        try (ResultSet columns = databaseMetaData.getPrimaryKeys(null, null, tableName)) {
            while (columns.next()) {
                String columnName = columns.getString("COLUMN_NAME");
                String pkName = columns.getString("PK_NAME");
                String keySeq = columns.getString("KEY_SEQ");
                builder.tablePkColumn(columnName, Integer.parseInt(keySeq), pkName);
            }
        } catch (SQLException e) {
            logger.error(e.getMessage());
        }
    }

    private static void extractColumnInfo(String tableName, DatabaseMetaData databaseMetaData, TableInfoBuilder builder) {
        try (ResultSet columns = databaseMetaData.getColumns(null, null, tableName, null)) {
            while (columns.next()) {
                String columnName = columns.getString("COLUMN_NAME");
                String ordinalPos = columns.getString("ORDINAL_POSITION");
                builder.tableColumn(columnName, Integer.parseInt(ordinalPos));
            }
        } catch (SQLException e) {
            logger.error(e.getMessage());
        }
    }
}

With this basic structure, the next step is to determine if the EntityRegistry tables are present in the database.
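A minimal sketch of such a check, assuming EntityRegistry.registry maps each entity class to its EntityMetadata and that the comparison is done case-insensitively (H2 reports table names in upper case); validateRegisteredTables is a hypothetical name:

//sketch: verify that every table mapped in the EntityRegistry actually exists in the database schema
public static void validateRegisteredTables(JdbcTablesInfo meta) {
    Set<String> dbTables = meta.tablesInfo.stream()
            .map(info -> info.tableName.toLowerCase())
            .collect(Collectors.toSet());
    for (EntityMetadata entityMetadata : EntityRegistry.registry.values()) {
        if (!dbTables.contains(entityMetadata.tableName.toLowerCase())) {
            logger.error("table '" + entityMetadata.tableName + "' is mapped in the EntityRegistry but missing from the schema");
        }
    }
}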

watch this space for more content