A Quick Example of JSF Facelet Composite Components

In this post I will demonstrate how to create JSF Composite Components using Facelets.

For this demonstration I will create an Employee Management screen with four views (modes): grid (a list of employees), a tree table with employees grouped by type (see previous post here), detail, and edit.

Only one view will be shown at any one time–by controlling the ‘rendered’ attribute of each component with a mode flag in the page controller bean.  Let us begin by taking a look at the skeleton for the Facelet that is to be the composite component:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:cc="http://xmlns.jcp.org/jsf/composite"
      xmlns:f="http://xmlns.jcp.org/jsf/core"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:p="http://primefaces.org/ui">

    <!-- INTERFACE -->
    <cc:interface>
      
    </cc:interface>

    <!-- IMPLEMENTATION -->
    <cc:implementation>
        
    </cc:implementation>
</html>

The component’s attributes are specified inside the cc:interface tag, and the body (the rendered part of the component) is specified in the cc:implementation tag.  The attributes can be accessed with the expression #{cc.attrs.<attribute name>}.

Employee Grid Component

The grid component requires two attributes: the data model and the current selection–see the following code fragment:

<cc:attribute name="model"
    type="java.util.List"/>
<cc:attribute name="selection"
  type="com.iamcodepoet.blog.entiy.Employee"/>

As the attributes are values, the value type is specified via the type attribute of the cc:attribute tag.  If an attribute were a method expression instead, the method-signature attribute would be used to specify its signature.

The component body is simply a data table as would normally be used (Primefaces Data Table in this case)–see the following code fragment.

<p:dataTable id="employeeGrid" value="#{cc.attrs.model}"
     var="emp"
     rowKey="#{emp.id}"
     selection="#{cc.attrs.selection}"
     selectionMode="single"
     rowIndexVar="rowIndex"
     scrollable="true"
     scrollRows="10"
     scrollHeight="200"
     liveScroll="true">

  <p:column  headerText=" " style="text-align: left; width: 20px" >
    #{rowIndex + 1}
  </p:column>
  <p:column  headerText="Last Name" >
    #{emp.lastName}
  </p:column>

  <p:column headerText="First Name" >
    #{emp.firstName}
  </p:column>

  <p:column headerText="Type" >
    #{emp.employeeType}
  </p:column>

</p:dataTable>

The component may now be used as follows:

<comp:employeeGrid model="#{employeeManager.model}"
              selection="#{employeeManager.currentRow}"
              rendered="#{employeeManager.gridMode}"/>

The grid component is to be rendered when the gridMode attribute of the employeeManager bean is true.

Employees By Type Component

The component to display employees by type will be a Primefaces TreeTable component.  It will take two attributes–the tree table model and the current selection.  The following code fragment demonstrates the attribute markup:

<cc:attribute name="model"
      type="org.primefaces.model.TreeNode"/>

<cc:attribute name="selection"
      type="org.primefaces.model.TreeNode"/>

The component body will be the Primefaces TreeTable itself:

<p:treeTable value="#{cc.attrs.model}"
       var="emp"
       selection="#{cc.attrs.selection}"
       selectionMode="single">
  <f:facet name="header">Employees By Type</f:facet>
  <p:column  headerText="Type" 
         style="width: 170px">
    <h:outputText value="#{emp.employeeType}"
            rendered="#{empty emp.lastName}"
            />
  </p:column>
  <p:column  headerText="Last Name" >
    #{emp.lastName}
  </p:column>

  <p:column headerText="First Name" >
    #{emp.firstName}
  </p:column>

  <p:column headerText="Middle Name" >
    #{emp.middleName}
  </p:column>

</p:treeTable>

And the markup on the main page is as follows:

<comp:employeesByTypes model="#{employeeManager.employeesByType}"
          selection="#{employeeManager.currentNode}"
           rendered="#{employeeManager.byTypeMode}"/>

Note that this component is rendered when the byTypeMode flag of the employeeManager bean is true.

The Employee Detail Component

The Employee Detail component takes a single attribute–the model which is an Employee instance:

<cc:attribute name="model" type="com.iamcodepoet.blog.entiy.Employee"/>

The component body is a two column panel grid:

<p:panelGrid columns="2">
  <h:outputText value="ID"/>
  <h:outputText value="#{cc.attrs.model.id}"/>
  
  <h:outputText value="Type"/>
  <h:outputText value="#{cc.attrs.model.employeeType}"/>
  
  <h:outputText value="First Name"/>
  <h:outputText value="#{cc.attrs.model.firstName}"/>
  
  <h:outputText value="Middle Name"/>
  <h:outputText value="#{cc.attrs.model.middleName}"/>
  
  <h:outputText value="Last Name"/>
  <h:outputText value="#{cc.attrs.model.lastName}"/>
   
</p:panelGrid>

This component is to be rendered when the detailMode flag is true:

<comp:employeeDetail model="#{employeeManager.currentRow}" rendered="#{employeeManager.detailMode}"/>

 The Employee Editor Component

Finally, let us take a look at the employee editor component–which is to be rendered when the editMode flag is on.  This component is very similar to the detail view, except that it renders data input components, rather than just output:

<p:panelGrid columns="2"
      
       style="margin-top: 5px; margin-bottom: 5px">
  <h:outputText value="ID"/>
  <h:outputText value="#{cc.attrs.model.id}"/>

  <p:outputLabel for="firstName"
           value="First Name"/>
  <p:inputText id="firstName"
         value="#{cc.attrs.model.firstName}"/>

  <p:outputLabel for="middleName"
           value="Middle Name"/>
  <p:inputText id="middleName"
         value="#{cc.attrs.model.middleName}"/>

  <p:outputLabel for="lastName"
           value="Last Name"/>
  <p:inputText id="lastName"
         value="#{cc.attrs.model.lastName}"/>

  <p:outputLabel for="employeeType"
           value="Type"/>
  <p:selectOneMenu id="employeeType"
           value="#{cc.attrs.model.employeeType}">
    <f:selectItems value="#{employeeManager.employeeTypes}"/>
  </p:selectOneMenu>

</p:panelGrid>

There are two crucial parts left to allow each component to be rendered at the appropriate time–the bean methods to switch the mode and a toolbar with buttons to call the bean methods:

<p:toolbar id="mainToolbar" >
  <f:facet name="left">
    <p:commandButton value="View By Type"
             ajax="false"
             actionListener="#{employeeManager.onViewByEmployeeType}"
             rendered="#{not employeeManager.byTypeMode and not employeeManager.editMode}"/>
    <p:commandButton value="View Grid"
             ajax="false"
             actionListener="#{employeeManager.onViewGridMode}"
             rendered="#{not employeeManager.gridMode and not employeeManager.editMode}"/>
    <p:commandButton value="Details"
             ajax="false"
             actionListener="#{employeeManager.onViewDetilMode}"
             rendered="#{not employeeManager.detailMode and not employeeManager.editMode}"/>
    <p:commandButton value="Edit"
             ajax="false"
             actionListener="#{employeeManager.onViewEditMode}"
             rendered="#{not employeeManager.editMode}"/>
    <p:commandButton value="Save"
             ajax="false"
             actionListener="#{employeeManager.onSave}"
             rendered="#{employeeManager.editMode}"/>
    <p:commandButton value="Cancel Edit"
             actionListener="#{employeeManager.onCancelEdit}"
             rendered="#{employeeManager.editMode}"
             update="@form"/>
  </f:facet>
</p:toolbar>

All buttons, except the Cancel Edit button, call a listener on the employeeManager to switch the mode.  Note that the Cancel Edit button is AJAX enabled and updates the form; this is so that the post can be performed even if there are required input components.

The following code fragment shows the bean methods necessary to toggle the mode flags:

public void onViewGridMode(ActionEvent event)
{
  try
  {
    updateMode(MODE_GRID);
  }
  catch (Exception e)
  {
    LOG.error(e.getMessage(),e);
  }
}

public void onViewDetilMode(ActionEvent event)
{
  try
  {
    updateMode(MODE_DETAIL);
  }
  catch (Exception e)
  {
    LOG.error(e.getMessage(),e);
  }
}

public void onViewEditMode(ActionEvent event)
{
  try
  {
    updateMode(MODE_EDIT);
  }
  catch (Exception e)
  {
    LOG.error(e.getMessage(),e);
  }
}

public void onViewByEmployeeType(ActionEvent event)
{
  try
  {
    updateMode(MODE_BY_EMPLOYEE_TYPE);
    if(employeesByType == null)
    {
      createModelByEmployee();
    }
  } 
  catch (Exception e)
  {
    LOG.error(e.getMessage(),e);
  }
}

public void onCancelEdit(ActionEvent event)
{
  try
  {
    mode= previousMode;
  } 
  catch (Exception e)
  {
    LOG.error(e.getMessage(),e);
  }
}

private void updateMode(String newMode)
{
  previousMode = mode;
  mode = newMode;
}

public void onSave(ActionEvent event)
{
  try(EmployeeDao dao = BlogDaoFactory.getInstance().createEmployeeDao())
  {
     if( currentRow.isNewRow())
     {
       dao.insert(currentRow);
     }
     else
     {
       dao.update(currentRow);
     }
     
     mode = previousMode;
     
  } 
  catch (Exception e)
  {
    LOG.error(e.getMessage(),e);
  }
}

The mode value constants are as follows:

protected final static String MODE_GRID = "grid";
protected final static String MODE_DETAIL = "detail";
protected final static String MODE_EDIT = "edit";
private static final String MODE_BY_EMPLOYEE_TYPE = "by-employeetype-model";
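
For completeness, the boolean flags referenced by the rendered expressions (gridMode, detailMode, editMode, byTypeMode) can be implemented as simple getters that compare the current mode against these constants.  The following is a minimal sketch; the actual bean may differ:

public boolean isGridMode()
{
  return MODE_GRID.equals(mode);
}

public boolean isDetailMode()
{
  return MODE_DETAIL.equals(mode);
}

public boolean isEditMode()
{
  return MODE_EDIT.equals(mode);
}

public boolean isByTypeMode()
{
  return MODE_BY_EMPLOYEE_TYPE.equals(mode);
}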

Final Screens

The following image shows the By Employee Type mode:

Employees By Type Mode

The following image shows the Grid Mode:

Grid Mode

In this example, the detail mode is not of much use, as it displays almost the same data as the grid mode.  However, the detail mode could display much more information such as a profile photo, salary information, manager/position information, etc.  The following image demonstrates the detail view:

Detail Mode

Finally, let us take a look at the Edit Mode:

employee-manager-edit

Note that the edit mode only has two buttons visible on the toolbar–Save and Cancel Edit.  Displaying the other buttons could allow the user to change modes without fully completing all input fields and put the manager in an invalid state.

This approach allows complexity to be delegated to the composite components.  Including all components in the same page would be messy and difficult to follow.  Consider, on the other hand, the same page with composite components:

<comp:employeeEditor model="#{employeeManager.currentRow}"
           rendered="#{employeeManager.editMode}"/>

<comp:employeesByTypes model="#{employeeManager.employeesByType}"
             selection="#{employeeManager.currentNode}"
             rendered="#{employeeManager.byTypeMode}"/>

<comp:employeeGrid model="#{employeeManager.model}"
           selection="#{employeeManager.currentRow}"
           rendered="#{employeeManager.gridMode}"/>

<comp:employeeDetail model="#{employeeManager.currentRow}"
           rendered="#{employeeManager.detailMode}"/>

Further, these components can be re-used in other screens.

That is all, enjoy.

 

Primefaces TreeTable and Java Streams

Java Streams make it easy to group data, which can be leveraged to create a Primefaces TreeTable–where each node represents a group.

In this brief post I will demonstrate how the collector Collectors.groupingBy(…) can be used to generate TreeNodes.  I will demonstrate this by working with a list of Employees, as I have in previous posts.

Let us begin by looking at the xhtml markup:

<p:treeTable value="#{employeeManager.employeesByType}"
             var="emp">
       <f:facet name="header">Employee By Type</f:facet>
       <p:column  headerText="Type" 
                  style="width: 170px">
           <h:outputText value="#{emp.employeeType}"
                          />
       </p:column>
       <p:column  headerText="Last Name" >
                 #{emp.lastName}
       </p:column>

       <p:column headerText="First Name" >
                #{emp.firstName}
       </p:column>

       <p:column headerText="Middle Name" >
                #{emp.middleName}
       </p:column>

 </p:treeTable>

I will generate a set of employees grouped by type (employeeType property)–that is, a map where the employee type is the key mapped to a list of all employees of that type.  The following code fragment demonstrates the grouping as well as TreeNode generation:

employeesByType = new DefaultTreeNode();

employeeList.stream()
        .collect(Collectors.groupingBy(e -> e.getEmployeeType()))
        .entrySet().forEach(e ->
        {
            createEmployeeTypeTreeNode(e.getKey(), e.getValue());
        });

The first step is to create a root TreeNode which is the bean object referenced by the GUI.  The next step is to reference the employee list stream and collect it using the groupingBy collector–which generates a Map<String,List<Employee>>.  The next step is to create a node for each employee type, and sub-nodes for the List<Employee>–performed by the method named createEmployeeTypeTreeNode(…)–see the following code fragment.

private void createEmployeeTypeTreeNode(String type, List<Employee> employees)
{
        Employee data= new Employee();
        data.setEmployeeType(type);
        TreeNode node = new DefaultTreeNode(data, employeesByType);
        employees.forEach(emp->new DefaultTreeNode(emp, node));
}

Note that this method creates an employee with only a type–the map key–which is the data object for the employee type node.  Finally, each employee in the list is converted to a TreeNode–a child of the type node.  The following image shows the resulting tree table:

Employee Tree-Table

Personally, I do not care for the sub-items repeating the employee type.  I will remove this by adding a render condition to the type column content, such that it is only rendered for the type node–I will simply check for an empty last name; I could have chosen various other attributes that indicate the empty employee type node.

<p:column  headerText="Type" 
                  style="width: 170px">
      <h:outputText value="#{emp.employeeType}"
                 rendered="#{empty emp.lastName}"
       />
</p:column>

The following image shows the new table:

Employee Tree Table-2

The table is now rendered in a cleaner manner.

That wraps it up–Enjoy.

Leveraging a Consumer to load JDBC ResultSets

It is typical for DAOs to have query methods with the following signature–or something like it: List<Employee> query().

It is implied that the method iterates over the ResultSet rows, and creates an instance of the Employee class for each row–which gets added to the list.

This is often the desired result–or perhaps some other collection.  Consider, however, that we wish to write the rows to a file or some arbitrary output stream–such as for a RESTful web service that produces the table rows.  Creating Employee instances only to iterate over all of them again to write them to some output stream could be costly–especially for large result sets.

In this post I will show a quick modification to a DAO query method that satisfies both cases–creating a list of instances of the table entity class as well as writing to some output stream.  I will do so by making use of a Consumer–such as java.util.function.Consumer.  It is important to note that the Java Consumer‘s accept(T) method does not declare exceptions to be thrown.  As performing operations with most java.sql interface methods requires at least java.sql.SQLException to be caught or declared to be thrown, I will define a new consumer–JdbcConsumer; see the following code fragment:

public interface JdbcConsumer<T>
{

    void accept(T t) throws SQLException;
}

Now we can perform JDBC operations, and leave it to the client to decide how SQLExceptions should be handled.

The modified query method looks like this:

public void query(String whereClause, Object[] parameters, JdbcConsumer<ResultSet> rowConsumer) throws SQLException
{
  String sql = createSelectSql(whereClause);
  try (PreparedStatement stmt = connection.prepareStatement(sql))
  {
    if (parameters != null)
    {
      for (int i = 0; i < parameters.length; i++)
      {
        stmt.setObject(i + 1, parameters[i]);
      }
    }

    try (ResultSet rst = stmt.executeQuery())
    {
      while (rst.next())
      {
        rowConsumer.accept(rst);
      }
    }
  }

}

Note that, by itself, this method does not do much of use.  The line rowConsumer.accept(rst) expects the consumer passed to perform whatever operation it wants with the ResultSet parameter.
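
For example, a caller could pass a consumer that simply prints one column from each row (the column name here is hypothetical):

dao.query(null, null, rst -> System.out.println(rst.getString("last_name")));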

Now I will define a new query method to take the place of the original method that returns a List–of Employee in this case; see the following code fragment:

public List<Employee> query(String whereClause, Object[] parameters) throws SQLException
{
  List<Employee> list = new ArrayList<>();
  JdbcConsumer<ResultSet> consumer = (rst) ->
  {
    list.add(createInstanceFromResultSet(rst));
  };

  query(whereClause, parameters, consumer);

  return list;
}

The consumer simply creates an Employee instance from each row, and adds it to the list.  The following code fragment shows the method createInstanceFromResultSet(…):

private Employee createInstanceFromResultSet(final ResultSet rst) throws SQLException
{
  Employee emp = new Employee();
  int i = 1;
  emp.setId(rst.getInt(i++));
  emp.setFirstName(rst.getString(i++));
  emp.setMiddleName(rst.getString(i++));
  emp.setLastName(rst.getString(i++));
  emp.setEmployeeType(rst.getString(i++));
  emp.setHireDate(LocalDate.ofEpochDay(rst.getLong(i++)));
  emp.setTerminationDate(LocalDate.ofEpochDay(rst.getLong(i++)));
  emp.setDateOfBirth(LocalDate.ofEpochDay(rst.getLong(i++)));
  return emp;
}

Let us now consider that we wish to write the Employee first, middle, and last names to a file instead:

try(EmployeeDao dao= BlogDaoFactory.getInstance().createEmployeeDao();
     Writer writer = new FileWriter("C:/tmp/employees.csv"))
{
  dao.query(null, new Object[0], employeeCsvWriter(writer));
}

The consumer to write the row to a file is as follows:

private JdbcConsumer<ResultSet> employeeCsvWriter(Writer writer)
{
  return rst->
  {
    int i = 2;
    try
    {
      writer.write(rst.getString(i++));
      writer.write(FIELD_SEPARATOR);
      writer.write(rst.getString(i++));
      writer.write(FIELD_SEPARATOR);
      writer.write(rst.getString(i++));
      writer.write(LINE_SEPARATOR);
    }
    catch(IOException e)
    {
      throw new RuntimeException(e.getMessage(), e);
    }
  };
}

The JdbcConsumer could just as well write the content in any other format.
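
For instance, a consumer along the following lines could emit each row as a line of JSON instead.  This is only a sketch, and it assumes the same column positions (2 through 4 for the first, middle, and last names) used by the CSV consumer:

private JdbcConsumer<ResultSet> employeeJsonWriter(Writer writer)
{
  return rst ->
  {
    try
    {
      // columns 2-4 hold the first, middle, and last names, as in the CSV consumer
      writer.write(String.format("{\"firstName\":\"%s\",\"middleName\":\"%s\",\"lastName\":\"%s\"}%n",
          rst.getString(2), rst.getString(3), rst.getString(4)));
    }
    catch (IOException e)
    {
      throw new RuntimeException(e.getMessage(), e);
    }
  };
}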

Primefaces DataTable with Dynamic Columns

The Primefaces DataTable component allows for dynamic columns to be rendered.  In this post, I will demonstrate this feature by creating a Pivot Table.  The desired outcome is a table as shown in the following image:

pivot-table2

The Primefaces (JSF) markup is as follows:

<p:dataTable value="#{pivotTableBean.data}"
                        var="row">
       <p:columns value="#{pivotTableBean.columns}" var="column" 
                  columnIndexVar="columnIndex">
           <f:facet name="header">
              <h:outputText value="#{column}"/>
           </f:facet>
           <h:outputText value="#{row[columnIndex]}"/>
               </p:columns>
 </p:dataTable>

I will use the random employee generator demonstrated in this article.  The objective of the pivot table is to group the employees by age group (rows) and employee type (columns).  The employee list will be generated as in the following code fragment:

List<Employee> employees = Stream.generate(new RandomEmployeeSupplier())
                    .limit(generateCount).collect(Collectors.toList());

The next step is to generate the columns and add the row header (Age Group)–see the following code listing:

columns = employees.stream().map(e -> e.getEmployeeType())
        .distinct().sorted()
        .collect(Collectors.toList());
columns.add(0, "Age Group");

It is this list of columns that will be used to populate the table’s columns.

For the purpose of this demonstration, the rows (age groups) will be the employees age decade (20s, 30s, etc).  The rows will be generated as follows:

List<String> rows = employees.stream().map(rowSupplier())
        .distinct().sorted()
        .collect(Collectors.toList());

///
private Function<Employee, String> rowSupplier()
{
    return e ->
    {
        long year = ChronoUnit.YEARS.between(e.getDateOfBirth(), LocalDate.now());

        if (year <= 20)
        {
            return "20 or Younger   ";
        }

        int decade = (int) year / 10 * 10;
        return decade + "s";
    };
}

Finally, to generate the cell values, I will iterate over all rows and for each row:

  1. Get all employees that match the row
  2. Iterate over all columns and, within the row’s filtered employee list, count the employees whose employee type matches the column.

The following code fragment shows the code to create the cell values:

data.clear();

for (String row : rows)
{
    Object[] record = new Object[columns.size()];

    List<Employee> filtered = employees.stream()
            .filter(e -> rowSupplier().apply(e).equals(row))
            .collect(Collectors.toList());
    record[0] = row;
    for (int i = 1; i < columns.size(); i++)
    {
        String column = columns.get(i);
        double value = filtered.stream()
                .filter(e -> e.getEmployeeType().equals(column))
                .count();
        record[i] = value;
    }
    data.add(record);
}

For the purpose of seeing the number of columns generated change, I have added an input box for the number of employees to generate, and the following code to be executed by a command button:

try
{
     List<Employee> employees = Stream.generate(new RandomEmployeeSupplier())
              .limit(generateCount)
              .collect(Collectors.toList());
      pivot(employees);
      
     LOG.info("Generated employees: " + employees.size());
} 
catch (Exception e)
{
    LOG.error(e.getMessage(),e);
}

Full Pivot Table

There is a fundamental flaw with this approach: the original list of employees from which each cell’s value was derived is lost.  If one wishes to click on a cell and view the corresponding employees, the full list must be filtered again.  A different approach that preserves each group will be the topic of an upcoming post.
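
In the meantime, a rough sketch of one possible direction–not necessarily the approach of that post–is a nested grouping that keeps the employees behind each cell:

// Outer key: age group (row); inner key: employee type (column).
// The List<Employee> behind each cell is preserved for later drill-down.
Map<String, Map<String, List<Employee>>> groups = employees.stream()
        .collect(Collectors.groupingBy(rowSupplier(),
                Collectors.groupingBy(Employee::getEmployeeType)));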

Generating Streams with Stream.generate(…)

The method Stream.generate(…) can be used to generate random and fixed streams of arbitrary data.

Let us consider that we wish to generate an SQL INSERT statement from a table and list of column names.  For example:  INSERT INTO employees (id,firstName,middleName,lastName,dateOfBirth) VALUES(?,?,?,?,?).  The following code fragment shows a method that generates a stream of ‘?’ corresponding to the length of the column list provided.

private String createInsertSql(String tableName, String... columns)
{
    String values = Stream.generate(() -> "?")
            .limit(columns.length)
            .collect(Collectors.joining(","));
    String columnList = Stream.of(columns)
            .collect(Collectors.joining(","));
    return String.format("INSERT INTO %s (%s) VALUES(%s)",
            tableName, columnList, values);
}

While there are many excellent ways of creating such a string, generating a stream makes it quite succinct.

The generate method takes a Supplier<T> that performs the actual generation–()->”?” in the previous example.  The Supplier can be as simple or complex as necessary.
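
For instance, a supplier can just as easily carry state or randomness; the following sketch generates five random two-digit numbers:

Random rand = new Random();
List<Integer> numbers = Stream.generate(() -> rand.nextInt(90) + 10) // 10 through 99
        .limit(5)
        .collect(Collectors.toList());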

Consider that we wish to generate random objects of the following class:

public class Employee
{
    private int id;
    private String firstName;
    private String middleName;
    private String lastName;
    private LocalDate hireDate;
    private LocalDate terminationDate;
    private LocalDate dateOfBirth;

   //rest of class omitted
}

Let us create a class to generate random employees (RandomEmployeeSupplier) that implements Supplier<Employee>.  The following code fragment shows the implementation of the get() method of the Supplier interface.

String first;
        String middle;

        if (rand.nextInt(100) >= 51)
        {
            first = DataFactory.getFemaleFirstName().getWord();
            middle = DataFactory.getFemaleFirstName().getWord();
        } else
        {
            first = DataFactory.getMaleFirstName().getWord();
            middle = DataFactory.getMaleFirstName().getWord();
        }

        String last = DataFactory.getLastName().getWord();

        LocalDate dob = getRandomDateOfBirth();
        LocalDate hireDate = getRandomHireDate(dob);

        LocalDate termDate = null;
        long age = ChronoUnit.YEARS.between(dob, currentDate); // years between date of birth and the current date
        if (rand.nextInt(100) > 80 || age >= 70)
        {
            termDate = getRandomTermDate(hireDate);
        }

        Employee emp = new Employee();
        emp.setFirstName(first);
        emp.setMiddleName(middle);
        emp.setLastName(last);
        emp.setDateOfBirth(dob);
        emp.setHireDate(hireDate);
        emp.setTerminationDate(termDate);

        return emp;

Note that for generating names, the Data Factory Library was used–available here on SourceForge and discussed here on a previous article (Data Factory Java API).  Arbitrary rules are used to generate values for the date of birth, hire date, as well as termination dates.

We can now use the Stream.generate(…) method to generate 10 random employees:

Stream.generate(new RandomEmployeeSupplier())
                   .limit(10)
                   .map(employeeToStringMapper())
                   .forEach(System.out::println);

The output is as follows:

Tesh, Enrique Porfirio; DOB: 1930-11-17; Employment: 2008-04-24 - 
Buseman, Cherri Idell; DOB: 1936-03-07; Employment: 2013-03-20 - 
Laird, Emerald Dona; DOB: 1942-09-24; Employment: 2007-04-16 - 
Pervis, Dierdre Sierra; DOB: 1991-10-18; Employment: 2012-02-15 - 2015-06-10
Jarding, Donte Andre; DOB: 1925-08-01; Employment: 1990-10-29 - 
Gnatek, Fabian Shannon; DOB: 1934-02-18; Employment: 1965-04-16 - 
Mroz, Dario Kirby; DOB: 1926-07-24; Employment: 2010-10-27 - 
Vanostberg, Frederic Eddy; DOB: 1933-11-29; Employment: 1972-02-12 - 
Mcbratney, Kenny Robin; DOB: 1992-09-13; Employment: 2014-06-19 - 2017-06-20
Tretera, Shaunte Talisha; DOB: 1951-07-16; Employment: 1969-06-23 - 
Calvin, Allan Long; DOB: 1917-11-30; Employment: 1984-06-11 - 
Charriez, John Milo; DOB: 1997-10-27; Employment: 2015-07-11 - 
Plue, Jeannie Kallie; DOB: 1998-06-08; Employment: 2017-04-18 - 2017-08-15
Edmunds, Pamila Emiko; DOB: 1974-01-05; Employment: 1997-05-18 - 
Lindorf, Clement Francesco; DOB: 1946-02-12; Employment: 2001-05-27 - 
Logalbo, Willie Marvin; DOB: 1928-04-12; Employment: 1965-10-05 - 
Consolini, Carmine Lonnie; DOB: 1924-09-05; Employment: 1957-07-13 - 
Mccullers, Laree Gisela; DOB: 1971-04-12; Employment: 2017-03-05 - 
Smolder, Charolette Carita; DOB: 1919-03-25; Employment: 2010-03-10 - 
Crosten, Lorenzo Hank; DOB: 1994-10-14; Employment: 2012-08-12 -

The following code fragment shows employee-to-string mapper:

public Function<Employee, String> employeeToStringMapper()
    {
        return e ->
        {
            String term = e.getTerminationDate() == null ? "" : e.getTerminationDate().toString();
            return String.format("%s, %s %s; DOB: %s; Employment: %s - %s",
                    e.getLastName(), e.getFirstName(), e.getMiddleName(),
                    e.getDateOfBirth(), e.getHireDate(), term);
        };
    }

It was not necessary to create a separate employee supplier class–this was simply a choice due to the length/complexity of the implementation.
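
Had the generation been simpler, an inline lambda would have sufficed.  A rough sketch, assuming the census names have already been loaded:

Stream.generate(() ->
{
    Employee emp = new Employee();
    emp.setFirstName(DataFactory.getFirstNameAnyGender().getWord());
    emp.setLastName(DataFactory.getLastName().getWord());
    return emp;
})
.limit(10)
.forEach(System.out::println);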

The RandomEmployeeSupplier is provided below–Enjoy.

public class RandomEmployeeSupplier implements Supplier<Employee>
{
    private final Random rand = new Random();
    private final int minBirthYear;
    private final int maxBirthYear;
    private final LocalDate currentDate;

    public RandomEmployeeSupplier()
    {
        this(LocalDate.now());
    }

    public RandomEmployeeSupplier(LocalDate currentDate)
    {
        this.currentDate = currentDate;
        this.minBirthYear = currentDate.getYear() - 100;
        this.maxBirthYear = currentDate.getYear() - 18;
    }

    @Override
    public Employee get()
    {
        String first;
        String middle;

        if (rand.nextInt(100) >= 51)
        {
            first = DataFactory.getFemaleFirstName().getWord();
            middle = DataFactory.getFemaleFirstName().getWord();
        } else
        {
            first = DataFactory.getMaleFirstName().getWord();
            middle = DataFactory.getMaleFirstName().getWord();
        }

        String last = DataFactory.getLastName().getWord();

        LocalDate dob = getRandomDateOfBirth();
        LocalDate hireDate = getRandomHireDate(dob);

        LocalDate termDate = null;
        long age = ChronoUnit.YEARS.between(dob, currentDate); // years between date of birth and the current date
        if (rand.nextInt(100) > 80 || age >= 70)
        {
            termDate = getRandomTermDate(hireDate);
        }

        Employee emp = new Employee();
        emp.setFirstName(first);
        emp.setMiddleName(middle);
        emp.setLastName(last);
        emp.setDateOfBirth(dob);
        emp.setHireDate(hireDate);
        emp.setTerminationDate(termDate);

        return emp;
    }

    private LocalDate getRandomDateOfBirth()
    {
        Year year = getRandomYear(minBirthYear, maxBirthYear);
        Month month = getRandomMonth(1, 12);

        int maxDays = month.length(year.isLeap());
        int day = getRandomDayOfMonth(1, maxDays);

        return LocalDate.of(year.getValue(), month, day);
    }
    
    private Year getRandomYear(int min, int max)
    {
        int i = DataFactory.genNumberInRange(min, max);
        return Year.of(i);
    }
    
    private int getRandomDayOfMonth(int min, int max)
    {
        int days = DataFactory.genNumberInRange(min, max);
        if(days == 0 )
        {
            days = 1;
        }
        
        return days;
    }
    
    public Month getRandomMonth(int min, int max)
    {
        int i = DataFactory.genNumberInRange(min, max);
        if(i == 0)
        {
            i = 1;
        }
        else if(i > 12)
        {
            i = 12;
        }
        return Month.of(i);
    }

    private LocalDate getRandomHireDate(LocalDate dob)
    {
        int maxYear = currentDate.getYear();
        int minYear = dob.getYear() + 18;

        Year year = getRandomYear(minYear, maxYear);

        int maxMonth = (year.getValue() == currentDate.getYear()) ? currentDate.getMonthValue() : 12;
        Month month = getRandomMonth(1, maxMonth);

        int maxDays = month.length(year.isLeap());
        int dayOfMonth = getRandomDayOfMonth(1, maxDays);

        return LocalDate.of(year.getValue(), month, dayOfMonth);
    }

    private LocalDate getRandomTermDate(LocalDate hireDate)
    {
        int maxYear = currentDate.getYear();
        int minYear = hireDate.getYear();

        Year year = getRandomYear(minYear, maxYear);
        boolean sameYear = year.getValue() == hireDate.getYear();
        int minMonth = (sameYear) ? hireDate.getMonthValue() : 1;
        int maxMonth = (year.getValue() == currentDate.getYear()) ? currentDate.getMonthValue() : 12;
        Month month = getRandomMonth(minMonth, maxMonth);

        boolean sameMonth = month.getValue() == hireDate.getMonthValue();
        int minDays = (sameMonth && sameYear) ? hireDate.getDayOfMonth() : 1;
        int maxDays = month.length(year.isLeap());
        int dayOfMonth = getRandomDayOfMonth(minDays, maxDays);

        return LocalDate.of(year.getValue(), month, dayOfMonth);
    }

}

 

 

Java Server Faces (JSF) & Java 8: SelectItem

In this post, I will demonstrate how Java 8 Streams can be leveraged to generate drop-down options (SelectItems) in a Java Server Faces (JSF) page.

Let us first consider that we have a table and corresponding Java class called EmployeeType with, at a minimum, the columns id and name. Moreover, we have a DAO called EmployeeTypeDao with a method named query() that returns a list of EmployeeType instances.

List<EmployeeType> list=dao.query();

Our objective is to convert this list of employee types into a list of select options for a drop-down menu in a JSF page.  For example:

<h:outputLabel value="Employee Type"

     for="employeeType"/>

<h:selectOneMenu id="employeeType"

                 value="#{employeeTypesBean.selectedTypeId}">

   <f:selectItems 

           value="#{employeeTypesBean.employeeTypeItems}"/>

</h:selectOneMenu>

To accomplish this, the list of employee types must be converted to list of SelectItems, that is,  List<javax.faces.model.SelectItem>.

try(EmployeeTypeDao dao= new EmployeeTypeDao())
{
    employeeTypeItems = dao.query("").stream()
            .map(e->new SelectItem(e.getId(),e.getName()))
            .collect(Collectors.toList());
}

 

The method map of a stream takes a mapping Function that takes an argument of type T (EmployeeType in this case) and returns a value of type R (SelectItem in this case).  Note that e is the argument (of type EmployeeType) being passed to the function, and the -> operator indicates that whatever follows is the function body (that returns an instance of SelectItem); since the function body is a single line, it is not necessary to include the return keyword, nor to include the body in braces ({}).
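
For readers less familiar with lambdas, the mapping expression above is shorthand for an anonymous implementation of Function; roughly:

Function<EmployeeType, SelectItem> mapper = new Function<EmployeeType, SelectItem>()
{
    @Override
    public SelectItem apply(EmployeeType e)
    {
        return new SelectItem(e.getId(), e.getName());
    }
};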

The rendered drop-down menu looks like this:

Employee Type Select

Let us now consider that we wish to sort the list by name:

employeeTypeItems = dao.query("")
               .stream()
               .sorted((a,b)->a.getName().compareTo(b.getName()))
               .map(e->new SelectItem(e.getId(),e.getName()))
               .collect(Collectors.toList());

The method sorted takes a Comparator.  The arguments a and b are the arguments to be passed to the method compare of the Comparator interface.  In this case, items are ordered by name.  The list now looks like this:

Items Sorted By Name

This is great, but now interns (alphabetically) are second on the list.  It is often the case that select items must be sorted in a specific way–other than alpha-numeric.  Consider that the table (and class) EmployeeType also has a column called sortOrder that indicates the order in which these items must be sorted.  Let us thus sort by this column, and by name as a secondary sort column:

private void loadEmployeeTypeItems()
{
    try(EmployeeTypeDao dao= new EmployeeTypeDao())
    {
        employeeTypeItems = dao.query("").stream()
                .sorted(employeeTypeComparator())
                .map(e->new SelectItem(e.getId(),e.getName()))
                .collect(Collectors.toList());
    }
    catch (Exception e)
    {
        throw new RuntimeException(e.getMessage(), e);
    }
}

private Comparator<EmployeeType> employeeTypeComparator()
{
   return (a, b)-> 
   {
        int comp=Integer.compare(a.getSortOrder(), 
                                b.getSortOrder());
        if(comp==0)
        {
            return a.getName().compareTo(b.getName());
        }
        return comp;
    };
}

In the interest of keeping the main method cleaner, the Comparator has been implemented as its own method–which returns a Comparator instance.  The list now looks like this–coincidentally, as it did originally, simply due to the fact that it was originally entered in the desired order.

Custom Sort SelectItems
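
As an aside, the same two-level ordering can also be expressed with the Java 8 comparator factory methods (assuming getSortOrder returns an int):

Comparator<EmployeeType> comparator = Comparator
        .comparingInt(EmployeeType::getSortOrder)
        .thenComparing(EmployeeType::getName);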

As a final exercise, let us consider that the EmployeeType table (and class) has a column named parentId, indicating a hierarchical structure of employee types–with top-level items (i.e. category names) having parentId=0.

List<EmployeeType> list = dao.query("");
employeeTypeItems = list.stream()
        .filter(e -> e.getParentId() == 0)
        .sorted(employeeTypeComparator())
        .map(e ->
        {
            SelectItemGroup group = new SelectItemGroup(e.getName());
            SelectItem[] items = list.stream()
                    .filter(c -> c.getParentId() == e.getId())
                    .sorted(employeeTypeComparator())
                    .map(c -> new SelectItem(c.getId(), c.getName()))
                    .collect(Collectors.toList())
                    .toArray(new SelectItem[]{});
            group.setSelectItems(items);
            return group;
        })
        .collect(Collectors.toList());

The first step is to filter the root items (i.e. parentId=0).  Having sorted the items as before, the stream is mapped to a SelectItemGroup, rather than a SelectItem as before.  A SelectItemGroup takes an array of SelectItems.  This is accomplished by filtering the original list by elements whose parentId is the same as the current element’s id property value–c.getParentId()==e.getId(). The filtered result is then sorted, and mapped to a SelectItem much like before.

The drop-down menu now looks like this:

Grouping SelectItems

That wraps her all up.  Enjoy!

Data Factory Java API

For as long as I can recall in my career as a software developer, I have had the need to generate test data for my applications. Whilst it seems trivial–and as such I have often resorted to generating record after record of Ada Byron and Charles Babbage when working with tables that require a first name and last name–populating tables with data that closely resembles production data is often crucial.  An even greater offense has been the lazy pawing of the keyboard (a;sldkfjas;ldkjf) as input for text fields.

Though I am certain Java APIs exist out there that generate random data, I have yet to find one–in particular one that not only allows me to generate data to insert into my test tables, but also to load into my classes–and which I can control directly from code.  As such I decided to create my own. I have placed it on SourceForge.

I have used various sources for the underlying data sets.  The data files, as well as README files indicating the sources, are located in the src folder in the source code.

Section 1. Generating Basic Content

To generate data, use the DataFactory utility class, located in the com.baseprogramming.dev.gen package. There are various types of data that can be generated, let us start with simple text generation.

String string=DataFactory.genString(20);
String words=DataFactory.genWords(20);
System.out.println("String:" + string);
System.out.println("Words: " + words);

 

That is it. The output is:

String:immunogenetical beet
Words: rhinencephalon fichus come apart accidence asleep unstatical 
       skip zone
       unnicknamed hunk woolly rhinoceros accidie psychometrics press 
       photographer videophone Araucan sepaled wheelhouse authoring
       language nonsciatic Farrar

Generating random dates is also simple.

Date first=DataFactory.genDate();
Date second=DataFactory.genDate(, );
Date third=DataFactory.genDateBetween(first, second);
SimpleDateFormat fmt= new SimpleDateFormat("yyyy-MM-dd");

System.out.printf("First: %s, Second: %s, Between First and Second:  
               %s\n",fmt.format(first),fmt.format(second),fmt.format(third));

The output is:

First: 2012-10-14, Second: 1990-07-11, Between First and Second: 1992-10-02

For the sake of completeness, the DataFactory utility generates numbers–though these can be generated using Java’s features directly.
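
For example, genNumberInRange(min, max)–the same method used by the random employee supplier elsewhere on this blog–returns an int within the given bounds, much as java.util.Random can:

int fromFactory = DataFactory.genNumberInRange(1, 100);
int fromJava = new java.util.Random().nextInt(100) + 1; // plain Java alternative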

Section 2. Generating Names

Currently, the DataFactory utility class generates a wide range of first and last names–though these are mostly western names.  Also, it is necessary to load the names explicitly. I have contemplated loading all data automatically, but this may take a while to run–especially as the data sets continue to grow. To load the names, use the loadAllCensusNames() method:

DataFactory.loadAllCensusNames();

The methods needed to generate names are: getFirstNameAnyGender(), getFemaleFirstName(), getMaleFirstName(), and getLastName().

These methods do not return a String, as do the ones that generate basic text.  Rather, these return a Term, which is a simple class that holds a word, as well as a list of categories to which the word belongs.  Use the getWord() method of the Term class to get the actual word–as a String. Consider the following example that generates a simple record for a person.

int count=10;
DataFactory.loadAllCensusNames();
for(int i=0;i < count;i++)
{
    String firstName=DataFactory.getFirstNameAnyGender().getWord();
    String lastName=DataFactory.getLastName().getWord();
    Date dob=DataFactory.genDate(, );
    SimpleDateFormat fmt= new SimpleDateFormat("yyyy-MM-dd");
    System.out.printf("%s,%s,%s\n",firstName,lastName,fmt.format(dob));
}

The output is:

Fallon,Proescher,1969-06-06
Cleopatra,Barbor,1945-09-30
Denny,Ankeny,1989-12-06
Rosio,Pilato,1986-07-27
Efren,Sniffen,1982-11-12
Lakia,Lua,2010-06-05
Corina,Homles,1979-03-22
Desirae,Hillin,1954-09-05
Rashida,Cinque,1989-03-22
Rowena,Tibbert,1957-07-19

The method loadAllDefaultData() can be used to load all the embedded data in one shot. Alternatively, the loaders–called by this method–can be called individually:

public static void loadAllDefaultData()
{
  loadTerms();
  loadCities();
  loadStates();
  loadAnatomicalStructures();
  loadMedicalSymptoms();
  loadDeseaseCauses();
  loadMedicalTreatments();
  loadAllCensusNames();
}

I will provide more examples at a later time, but this should be enough to get you started. I hope you find it useful–Enjoy!

Creating Java Deliverables with Apache Ant

Deliverables for Java projects often include three components (jars): the component (or library) jar, the sources jar, and the javadoc jar.  The javadoc provides the IDE with content necessary to provide interactive/on-demand documentation.  The sources jar contains the Java source code, which can also be useful for a client (that is, an API client programmer), especially when stepping through the code.

In this–brief–post, I will demonstrate how to create such deliverable components using Apache Ant.  Note that I will use NetBeans for this example–this should be largely irrelevant, except perhaps for some default property values as defined by NetBeans.

When creating an Ant script, one of the first steps is to create properties to hold values for later use.  Note that properties are not exactly variables–properties are immutable.  In this case, the properties needed are as follows:

  1. Deliverable Name.  This should be the deliverable’s actual name.  The final jar name will contain additional information such as version, and type (i.e. javadoc, sources).
  2. Deliverable Directory.  The directory where all deliverable components are to be stored.
  3. Version.  Version number information–generally in the form x.x.x for major, minor, and patch version number.
  4. Version Directory.  A deliverable’s main directory may contain any number of version sub-directories.  A new sub-directory should be created for each version.
  5. Root Name. For this case, root name refers to the file name portion including the deliverable directory, version directory, deliverable name, and version number.

In a NetBeans project, the build process can be customized by editing the build.xml file located in your projects root directory.

By default, a build file may look like this:

<project name="BlogDemos" default="default" basedir=".">
     <description>Builds, tests, and runs the project BlogDemos.</description>
    <import file="nbproject/build-impl.xml"/>
</project>

In this case, the build process is carried out as specified in the build file nbproject/build-impl.xml, which is imported into the script.

To customize the process, simply add the necessary targets (and properties as needed).  The following fragment sets up the necessary properties:

<property name="deliverable.name" value="blog-demos"/>
    <property name="deliverable.dir"
               value="deliverables\${deliverable.name}\"/>
    <property name="version" value=".."/>
    <property name="version.dir"
               value="${deliverable.dir}\${version}"/>
    <property name="root.name"
            value="${version.dir}/${deliverable.name}-${version}"/>

Note that I could have created the names as a single string–and perhaps hard-coded the values directly in the Ant tags as needed.  However, breaking them up and defining them as properties provides greater flexibility if any value is to change.

Next I will create the javadoc:

<target depends="-javadoc-build"
                  description="bundle javadoc in a jar"
                  name="package-doc">
       <jar basedir="dist/javadoc"
              destfile="${root.name}-javadoc.jar"/>
 </target>

In actuality, this Ant target does not create the javadoc.  By specifying that this target depends on -javadoc-build, Ant executes the javadoc builder, which places the generated content in the dist/javadoc directory (the dist directory is located in the project’s root directory).  After -javadoc-build is executed, my target takes those results, and packages them in a jar–using the root name with the -javadoc.jar suffix.

The next step is to create the sources jar.

<target description="bundle sources in a jar"
            name="package-sources">
     <jar basedir="src"
          destfile="${root.name}-sources.jar"/>
</target>

This target simply takes the contents of the src directory, and packages them into a jar.  The src directory is where all java sources files are contained–and is also located in the project’s root directory.

Defining Ant targets (by itself) does nothing–these must be executed somehow.  One handy way is to hook them (as dependencies) to the build process.  One such option is to override the -post-jar target.  See the following code fragment.

<target name="-post-jar" depends="package-sources, package-doc"  >
         <copy tofile="${root.name}.jar" file="${dist.jar}" />
 </target>

Note that this target has package-sources and package-doc as dependencies–which causes Ant to execute those targets first.  Finally, it takes the actual jar file, and copies it to the version folder.  The ${dist.jar} property refers to an entry in the NetBeans project properties file–which indicates the location and name of the jar file generated after running the build process.  By default, this is the project name.

The output file structure is as follows:

  deliverables
     +--blog-demos
        +--0.1.0
            |--blog-demos-0.1.0-javadoc.jar    
            |--blog-demos-0.1.0-sources.jar
            |--blog-demos-0.1.0.jar

This is nice, but we can go a step further and add a build number.  There are two steps necessary to include a build number: 1) create an empty file in your project’s root directory called build.number and 2) call the Ant task <buildnumber/>–this task looks for a file called build.number in which it maintains the build number sequence.  I will create this in a separate task, which I shall call setup:

<target name="setup">
   <delete dir="${version.dir}"/>
   <buildnumber/>
   <property name="root.name"
          value="${version.dir}/${deliverable.name}-${version}.${build.number}"/>
</target>

and will add this target as a dependency to the -post-jar target.

The dependency string should look as follows: depends="setup,package-sources, package-doc" . Furthermore, notice that I have moved the root.name property definition to the setup task. This allows me to include the build number after it is generated, and I have included a line to delete the existing files–for good measure.

Now, this is better, but there is a small issue:  Every time the project is built, a new build number will be generated–this can get annoying.  As an alternative, the target name can be changed (from -post-jar).  An Ant target can be run manually in several ways:

  1. Select Run Target from the context menu by right-clicking on the build file (i.e. build.xml) in the Files Explorer Panel (this is the panel generally next to the Projects Panel).  Custom targets are located in a sub-menu called Other Targets.
  2. By right-clicking on the build file content in the editor and also select Run Target and proceed as step 1.
  3. Create a shortcut by right-clicking on the target of choice in the build file Navigator Panel (generally located below  the Files Panel).  This panel lists all the targets of the selected build file.  A brief wizard will aid you in creating the shortcut.

The following illustration shows the Files and Navigator panels in NetBeans.

netbeans-ant-task-shortcuts-300x284

The following illustration shows the context menu for running a task manually (having right-clicked on the editor with an open build file, or the build file itself).

netbeans-run-ant-task-300x257

As you can see, a little goes a long way with Ant.  Stay tuned for further posts on Apache Ant.  Enjoy.

Fun with Reflection (Java)

Reflection is (in short) the ability to inspect and modify the state of a class (via its fields) as well as execute methods and create class instances.  Reflection methods are available via an object’s class, for example:

Class zclass =Class.forName("com.baseprogramming.model.Person");
Field field=zclass.getDeclaredField("firstName");

You will note that a class has getDeclaredFields() and getFields(). The difference is that getDeclaredFields() has access to private members, but not inherited members, whilst getFields() has access to inherited members, but not private members.
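
A quick illustration of the difference, using the Employee class (which extends Person) shown later in this post:

// Fields declared on Employee itself, including private ones (id, title, salary)
Field[] declared = Employee.class.getDeclaredFields();

// Public fields only, including inherited public fields (none in this example,
// since Employee's fields are private and Person's are protected)
Field[] visible = Employee.class.getFields();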

Having obtained a field, use the set(…) method to set the value, and the get() method to get the field’s value.

Person person= new Person();
Class zclass=person.getClass();

Field field=zclass.getDeclaredField("firstName");
field.setAccessible(true);
field.set(person,"Ada");

String firstName=(String)field.get(person);

Since the field is private, it is necessary to mark it as accessible (field.setAccessible(true); ) in order to be able to set the field’s value.

Consider the following worker class–which performs some arbitrary job:

public class Worker  implements Runnable
{
    private String outputPath;
    private String extension;
    private String prefix;

   ///rest of class definition omitted.
}

and that requires a configuration file (i.e. a properties file):

outputPath=c:/tmp/
extension=txt
prefix=worker_output

While this configuration is small and trivial, often a worker class may require extensive configuration loading, which can be tedious–more so if you have many such classes with which to contend. With reflection, the job of loading configuration can be delegated to a single class:

public final class PropertyLoader
{
    private PropertyLoader(){}
 
    public static void load(Object object, Properties properties)
    {
        Class<?> zclass=object.getClass();
        try
        {
            for(Entry<Object,Object> e : properties.entrySet())
            {
                String name=(String)e.getKey();
                Object value=e.getValue();
 
                Field field=zclass.getDeclaredField(name);
                field.setAccessible(true);
                field.set(object, value);
            }
        }
        catch(IllegalAccessException | NoSuchFieldException e)
        {
            throw new RuntimeException(e.getMessage(), e);
        }

    }
}

That is all it takes.  Now loading the class configuration is simple(r):

Properties props = new Properties();
props.load(reader);
PropertyLoader.load(worker, props);
 
System.out.println(worker);

The output:

Worker{outputPath=c:/tmp/, extension=txt, prefix=worker_output}

These examples only work with String data types. Handling other (at least the basic) data types requires further considerations–which I will demonstrate in later posts.
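
As a rough illustration only–not necessarily the approach of those later posts–the basic types could be handled by converting the property string before calling set(…):

private static Object convert(Class<?> type, String value)
{
    if (type == int.class || type == Integer.class)
    {
        return Integer.valueOf(value);
    }
    if (type == double.class || type == Double.class)
    {
        return Double.valueOf(value);
    }
    if (type == boolean.class || type == Boolean.class)
    {
        return Boolean.valueOf(value);
    }
    return value;
}

The loader would then call field.set(object, convert(field.getType(), (String) e.getValue())) instead of setting the raw string.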

Finally, I shall demonstrate another point. Given that the method getDeclaredFields() does not return inherited fields, attempting to fetch a class’ inherited fields by name becomes a problem. If you have such a need (though often rare for a worker type class), you can write a method that traverses the class hierarchy–looking for the named field. Consider the following classes:

public class Person
{
    protected String firstName;
    protected String middleName;
    protected String lastName;
    protected Date dateOfBirth;
 
   //rest of class definition omitted
}
 
public class Employee extends Person
{
    private int id;
    private String title;
    private double salary;
 
     //rest of class definition omitted
}

 

Now, consider the following code fragment:

Field field= Employee.class.getDeclaredField("firstName");

This produces an exception java.lang.NoSuchFieldException: firstName. The reason is–of course–that ‘firstName’ is an inherited member. Consider, on the other hand, a method that crawls up the hierarchy until it finds the named field–or ultimately fails:

public Field getField(Class<?> zclass, String name) throws NoSuchFieldException
{
  try
  {
     Field field = zclass.getDeclaredField(name);
     return field;
  }
  catch(NoSuchFieldException e)
  {
     if(zclass == Object.class)
     {
        String string = "Field '" + name + "' was not found";
        throw new NoSuchFieldException(string);
     }
     return getField(zclass.getSuperclass(), name);
  }
}

Consider now the following fragment:

String[] names={"firstName","middleName","lastName","title"};
String[] values={"Augusta","Ada","Byron","<code>Countess of Lovelace</code>"};
Employee instance = new Employee();
Class clazz=instance.getClass();
for(int i= ; i<names.length ; i++)
{
      Field field=getField(clazz,names[i]);
      field.setAccessible(true);
      field.set(instance, values[i]);
}
System.out.println(instance);

 

The output is:

Employee{id=0, firstName=Augusta, middleName=Ada, lastName=Byron, dateOfBirth=null, title=Countess of Lovelace, salary=0.0}

As you can see, there is nothing to fear with reflection–at least with the basics.  Stay tuned for more posts on the topic–Enjoy!