Irrationality at the service of rationality in decision-making

The shared decision, or group decision, is one of the cornerstones of modern Western civilization; democracy itself is based on this concept. But how often do groups make the best decision? How much of the expected goal – as a group decision – is obtained from a project approval committee, or from a team meeting called to validate the schedule?

Arthur's Round Table

In Arthur’s reign, decisions were shared by peers

Legend has it that King Arthur had a round table for meetings, the Round Table, where all were equal and therefore were equally involved in the decisions taken there. In the companies where we work, we like the idea of participating in relevant events, including decision-making; it is a natural desire for democracy, even if a company is not a democratic institution.

Group decision-making in companies is an administrative theory that aims at several advantages, such as increased workplace satisfaction and motivation as a result of participation, which ultimately result in a positive organizational atmosphere as well as increased productivity. The popular notion that “many heads think better than one” can materialize in decisions of higher quality than individual ones. We just do not know what to do with the perception that team decision-making can be unproductive and frustrating, wasting time – and money.

Max Gehringer, business administrator and writer, author of the book Classics of the Corporate World, says in his article The Seven Rules of the “Never” Manual for Projects: “Never try to convince if you can command.”

Milgram's Experiment

Milgram’s Experiment: E=Experimenter, S=Subject, A=Actor

There is a macabre aspect of group decision-making, and it is inherent to the main actor on stage: the human being. In 1964, an experiment led by psychologist Stanley Milgram placed a group of volunteers in a room; on the other side, separated by a thin wall, one of the supposedly randomly selected volunteers sat tied to a chair. Each time the researcher's question got a wrong answer from the man, the group would press a button that inflicted an electric shock on him. At first the shocks were light and brief, but over time they became strong and intense – and so did the shouts of the volunteer. Finally, after a long shock, the man went silent, and a grim atmosphere took over the place. Of course, this was an experiment, and the man, actually an actor, never received any shock. What mattered in the research was the finding that the group lost its sense of responsibility – in this case, for the punishment – since the decision was made by the group rather than by the individual. A control test showed that when the decision was individual, participants did not inflict this level of electric shock on the poor man.

Freek Vermeulen

Freek Vermeulen, author of Business Exposed

The London Business School professor and author of the book Business Exposed, Freek Vermeulen, talks about people’s inclination to imitate others’ behavior. In fact, this is perhaps one of our first impulses, and the one responsible for our learning to speak, to walk and… to make decisions. It is easy to imagine a decision-making meeting where the first person who ventures an opinion is followed by the others, while the rest simply stay silent. Moreover, Freek explains this silence as inhibition, not as a sign of agreement, despite the saying that “those who are silent, consent”. The fact is that nobody likes to be in the minority, whether cheering for a football team or during a major decision in the company. “The consequence of this is that, in a meeting, it may happen that everyone disagrees, but no one speaks up because they are reluctant,” says the professor.

So what do we do with the shared decision, now that we have demonized it? Sometimes it is necessary to tear down walls to build foundations. So let’s learn how to extract the best from shared decision-making, in the best possible way, to ensure that in a project approval committee, or in the team meeting for validation – and of course approval – we achieve superior commitment and maturity.

Norman Maier, an American experimental psychologist, advocates the method of group decision-making in the practice of effective command. In his studies, Maier takes two dimensions into account: the first is the internal social weight, and the second is the weight in organizational efficiency. By crossing these two dimensions in a matrix, we can fit each decision into a quadrant. Maier concluded that it is possible to significantly reduce the bias of shared decision-making by electing a “professional integrator.”

maier_matrix

Maier’s imaginary matrix for decision model

This figure is responsible for keeping the discussion at a high level, valuing the sphere of information and skilfully leading the group to the choice of a solution. The professional integrator steps in for critical decisions, where the room for maneuver is reduced and the group cannot decide beyond certain limits. There is one more aspect of this form of decision-making: obtaining a decision that is not really a decision. The integrator steers the group towards a more or less predetermined decision, giving the floor to the right person, suspending the meeting, or declaring the meeting finished at the appropriate time. Skill is essential to accomplish all of this without the members perceiving the manipulation, while getting the best out of the process of resistance to change: the “acceptance of the consequences of the decision.”

Manhattan Connection guys

Manhattan Connection program commentators. Lucas Mendes (center) is an example of a Maier’s professional integrator

The model suggested by Maier reminds me a lot of the Manhattan Connection program (despite its name, a Brazilian show), aired on the Globo News pay channel. On air for 19 years, its political, economic and cultural commentators debate the highlights of the week. The discussion table is sometimes chaotic, but it is expertly coordinated by journalist Lucas Mendes, who plays the role of professional integrator, introducing the subject and directing the discussion until (almost always) reaching a result. A result which, you can imagine, was the expected one beforehand.

Planning for shared decision-making, and carrying out the process in this less orthodox way, can make it easy to implement a good decision-making system. The effort expended on the activity is rewarded with the motivation, participation and commitment of those involved in the practical implementation of the decision.


Coding SVN hook in Java

In today's How IT we create an SVN pre-commit hook coded in Java 🙂

captain.hook

Why we need this

From small teams to really big teams counting more than 100 heads, source control is almost mandatory. Apache Subversion (SVN) is one of the most popular solutions for software versioning and release control. Although a standard installation has no way to validate anything committed by users, SVN is flexible enough to accept external applications, which are referred to as hooks. Each hook is executed at a specific step of the process: start-commit, pre-commit, pre-lock, pre-unlock, post-commit, etc.

SVN hooks can be used for almost everything, but commonly they perform notification, validation, or replication for each operation. So, this could be your source of power over bad artefacts submitted by users, preventing a lack of documentation in your application source code.

What we need

Development

  • JDK 1.7+
  • Log4J 1.2.16
  • SVNKit 1.8.12
  • Maven 3.0.5+

Execution – server

  • VisualSVN Server 3.7.1 (including SVN 1.9.7)
  • Microsoft Windows (just because of shell scripts)

Execution – client

  • TortoiseSVN 1.9.7+ (or your favourite SVN clients flavour)

How to

Let’s create a hypothetical scenario where we need to validate the names of files submitted by the team. Also, we need to be sure that no unacceptable word is used in our artefacts’ contents. In this case, we want to stop the commit before the bad file becomes part of the repository.

To achieve this goal we must create two hooks (it could be only one, but for educational purposes it will be two), which will be executed in the pre-commit step. Therefore, in case of broken rules, the commit will not be concluded.

A pattern to make it easy

First things first, we need to prepare the following dependencies in pom.xml:

<!-- Log4J -->
<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.16</version>
</dependency>
<!-- SVN -->
<dependency>
  <groupId>org.tmatesoft.svnkit</groupId>
  <artifactId>svnkit</artifactId>
  <version>1.8.12</version>
</dependency>

The first validation checks whether the file name contains only letters and numbers. It’s not a big challenge, right? Indeed! To do this task we coded the file TickingClockHook.java:

@Override
public void validate(String[] args) throws PreCommitException
{
	//Isolate the file name
	String fileName = args[4];
	log.debug("File name: " + fileName);
	if (fileName.contains("/"))
	{
		fileName = fileName.substring(fileName.lastIndexOf("/") + 1);
	}
	//Strip the extension, if any
	int dot = fileName.lastIndexOf(".");
	if (dot > 0)
	{
		fileName = fileName.substring(0, dot);
	}
	
	//Pattern to accept only letters and numbers
	Pattern pattern = Pattern.compile("^[a-zA-Z0-9]+$");
	
	if (!pattern.matcher(fileName).matches())
	{
		throw new PreCommitException("The file name must use only letters and numbers; " + fileName + " is not valid.");
	}
}

That was easy! As you can see, nothing in this method is exotic or hard to understand. Take a look at:

  • First, we read the file name from the method’s arguments
  • Then, we isolate the file name from its path and extension
  • Next, the regular expression accepting only letters and numbers is created
  • We verify whether the file name matches the pattern
  • When the file name contains any character other than letters and numbers, the validation fails
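
The same check can be exercised outside SVN. The sketch below is a standalone approximation of the hook's logic, with class and method names of my own invention, that isolates the base name and validates it against the pattern:

```java
import java.util.regex.Pattern;

public class FileNameCheck
{
  //Approximation of the hook's logic: strip path and extension, then validate
  static boolean isValidFileName(String path)
  {
    String fileName = path;
    if (fileName.contains("/"))
    {
      fileName = fileName.substring(fileName.lastIndexOf("/") + 1);
    }
    int dot = fileName.lastIndexOf(".");
    if (dot > 0)
    {
      fileName = fileName.substring(0, dot);
    }
    //Accept only letters and numbers
    return Pattern.matches("^[a-zA-Z0-9]+$", fileName);
  }

  public static void main(String[] args)
  {
    System.out.println(isValidFileName("trunk/Test123.java"));  //true
    System.out.println(isValidFileName("trunk/test-123.java")); //false
  }
}
```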

OK, but… does it work? Sure! See this output when trying to commit the file test-123.java:

Error: Commit failed (details follow):
Error: Commit blocked by pre-commit hook (exit code 1) with output:
Error: ==============================================================
Error:
Error: Your commit is blocked since the file name is not accepted.
Error: File: trunk/test-123.java
Error: Reason: The file name must use only letters and numbers; test-123 is not
Error: valid.
Error: You should fix it to try again.
Error:
Error: ==============================================================
Error: If you want to break the lock, use the 'Check For Modifications' dialog or the repository browser.

What about validating the contents? Let’s see what is coded in BadLanguageHook.java:

@Override
public void validate(String[] args) throws PreCommitException
{
	String contents = loadFileContents(args[1], getUsername(), getPassword(), args[2], args[4]).toUpperCase();
	
	for (String word : bannedWords)
	{
		log.debug("Looking for: " + word);
		if (contents.contains((word.toUpperCase())))
		{
			throw new PreCommitException("The use of " + word + " is not acceptable in file contents.");
		}
	}
}

Fewer lines and nothing difficult:

  • The file contents are read and upper-cased
  • We loop through each banned word on the list
  • The contents are searched for the forbidden word
  • If the word is there, the validation fails
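
The case-insensitive scan can be reproduced in isolation. A minimal sketch (class name, method name and word list are mine, not from the project):

```java
import java.util.Arrays;
import java.util.List;

public class BadLanguageCheck
{
  //Hypothetical banned word list, for illustration only
  static final List<String> BANNED_WORDS = Arrays.asList("peter pan", "tinker bell");

  //Returns the first banned word found in contents, or null when the file is clean
  static String findBannedWord(String contents)
  {
    String upper = contents.toUpperCase();
    for (String word : BANNED_WORDS)
    {
      if (upper.contains(word.toUpperCase()))
      {
        return word;
      }
    }
    return null;
  }

  public static void main(String[] args)
  {
    System.out.println(findBannedWord("class JollyRoger { /* Peter Pan was here */ }")); //peter pan
    System.out.println(findBannedWord("class Hook { }"));                                //null
  }
}
```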

When we try to commit a dirty file:

Error: Commit failed (details follow):
Error: Commit blocked by pre-commit hook (exit code 1) with output:
Error: ==============================================================
Error:
Error: Your commit is blocked since something in file contents is not
Error: allowed.
Error: File: trunk/JollyRoger.java
Error: Reason: The use of peter pan is not acceptable in file contents.
Error: You should fix it to try again.
Error:
Error: ==============================================================
Error: If you want to break the lock, use the 'Check For Modifications' dialog or the repository browser.

After beholding the magic, we can check inside the top hat.

I created a set of three classes as a pattern to make it easy to create any SVN pre-commit hook.

The complete source code, compiling and shining, is available on my GitHub.

Let’s see the first and “core” class of this pattern, PreCommitInterdiction.java:

public final static void main(String args[])
{
  //Checking for valid entries
  if (!hasValidArguments(args))
  {
    defineAsInternalError("This execution is invalid. Please, check for pre-commit script and useful messages in your log file " + LOG4J_LOG_FILE);
  }
  else
  {
    //Instantiate the hook
    PreCommitHook hook = createHook(args[0]);
    
    //Executes the validation using the hook
    try
    {
      if (hook != null)
      {
        hook.validate(args);
        printSuccess();
      }
    }
    catch (PreCommitException ex)
    {
      log.error("Could not validate because a pre-commit exception happened: " + ex.getMessage(), ex);
      printFail(ex.getMessage());
    }
    catch (Exception ex)
    {
      log.error("Could not validate because a general exception happened: " + ex.getMessage(), ex);
      defineAsInternalError(ex.getMessage());
    }
  }
  log.debug("...end!");
}

The main method is responsible for validating its arguments and calling the hook:

  • Check whether the calling arguments are valid
  • Instantiate the implementation of the pre-commit hook
  • Execute the validation implemented by the hook
  • When the validation fails, a message explaining it is printed
  • If an unexpected error occurs, a message explaining it is printed

As you may notice, the call to hook.validate(args) is the point of interest on this map. The program calls the hook’s validate method, which comes from the interface PreCommitHook.java:

public interface PreCommitHook
{
  /**
   * Performs the validation of commit.
   * 
   * @param args Array of parameters to execute the hook.
   * @throws PreCommitException Happens when any or all implemented rules are broken.
   */
  public void validate(String[] args) throws PreCommitException;
}

This interface has to be implemented by all pre-commit hook classes and has only one method to override: validate.
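
To illustrate, a minimal hook implementing the interface could look like the sketch below. The hook itself (NoTempFilesHook) is hypothetical, and the interface and exception are reproduced inline so the sketch compiles on its own; the real classes live in the project on GitHub:

```java
class PreCommitException extends Exception
{
  public PreCommitException(String message) { super(message); }
}

interface PreCommitHook
{
  void validate(String[] args) throws PreCommitException;
}

//Hypothetical hook: rejects any commit whose file path ends with .tmp
public class NoTempFilesHook implements PreCommitHook
{
  @Override
  public void validate(String[] args) throws PreCommitException
  {
    String filePath = args[4]; //same argument position used by the other hooks
    if (filePath.endsWith(".tmp"))
    {
      throw new PreCommitException("Temporary files are not allowed: " + filePath);
    }
  }
}
```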

The third class is our specific implementation of Exception, the PreCommitException.java:

public class PreCommitException extends Exception
{
  /**
   * New friendly pre-commit exception.
   * 
   * @param message Explanation of the exception
   */
  public PreCommitException(String message)
  {
    super(message);
  }
}

There is nothing special in this exception; it is simply used to signal specific issues in the validation process.

Back to the hook responsible for validating file contents: what is the loadFileContents method used in BadLanguageHook.java? It comes from its superclass. All pre-commit hooks have to extend PreCommitInterdiction, so this method is available. Take a look:

protected String loadFileContents(String repositoryPath, String username, String password, String transaction, String filePath) throws PreCommitException
{
  File repo = null;
  SVNLookClient svnLook = null;
  String contents = null;
  try
  {
    repo = new File(repositoryPath);
    ISVNAuthenticationManager authenticationManager = BasicAuthenticationManager.newInstance(username, password.toCharArray());
    ISVNOptions svnOptions = new DefaultSVNOptions();
    svnLook = new SVNLookClient(authenticationManager, svnOptions);
  }
  catch (Exception ex)
  {
    log.error("Fail while connecting to SVN using \"" + username + "\": " + ex.getMessage(), ex);
    throw new PreCommitException("Unable to connect to SVN: " + ex.getMessage());
  }
  try
  {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    svnLook.doCat(repo, filePath, transaction, baos);
    contents = baos.toString();
  }
  catch (SVNException ex)
  {
    log.error("Fail while loading contents of \"" + filePath + "\": " + ex.getMessage(), ex);
    throw new PreCommitException("Unable to load contents of submitted file: " + ex.getMessage());
  }
  if (contents == null)
  {
    throw new PreCommitException("Contents loaded from file is invalid.");
  }
  
  return contents;
}

  • The first try block connects to the SVN repository
  • The second try block reads the contents of the submitted file

If the instance is unable to connect to SVN, or the file contents are unreachable or invalid, a PreCommitException is thrown.

At this point, we have walked through the entire Java implementation. The pattern should be functional on any operating system, I guess (I have not tested any other than Windows). The source code is available on my GitHub, as well as the entire solution including configuration files and scripts.

However, the Java application (hook) will not work without a little help. The SVN server does not execute applications directly, only shell scripts. Therefore, when using Windows as the host of the SVN server (not as a client), you need to write a couple of .bat or .cmd files to call your hooks.

To accomplish our current scenario, three scripts are needed:

  • pre-commit.bat: responsible for calling the other scripts; in fact, everything could be written here
  • TickingClockHook.bat: calls the hook responsible for validating file names
  • BadLanguageHook.bat: calls the hook responsible for validating each file’s contents

These scripts will work only on a Windows host. If you are running your SVN server on Linux, you need to write bash scripts to do what they do; it is not a big deal.

Finally, there are two files used to configure the execution:

  • log4j.xml: configuration of log messages
  • svn.properties: authentication parameters used by the hook to connect to SVN

Here we are! I hope this pattern helps you make things a little more organized in your projects.

Download the complete and functional project from my GitHub.


Creating rules dynamically with Drools

This “how to” walks through a solution to dynamically create and execute business rules.

rules

Why we need this

It is common to need some kind of rules to choose between one action and another. Often, these rules are expressed as a sort of business thinking. Drools is a Business Rules Management System maintained by Red Hat and JBoss, and the answer when we need a robust, flexible and open source solution!

What we need

  • JDK 1.7+
  • Drools 6.5+
  • Maven 3.0.5+

How to

First of all, we need to understand what a rule means in this context. Each rule in Drools works like an if statement in a programming language; basically, we have two parts: condition and action. The rule is written by someone who understands the business rule and adapts it to the schema of the Drools Rule Language:

rule "Apply 10% off over all items of $4 or more"
when
    Product(price >= 4.0)
then
    product.discount(10);
end

Pretty easy, right?!

Line 1 gives the friendly name of the business rule.
Line 3 is the condition to satisfy; in this case, the price of the product has to be equal to or greater than 4 bucks.
Line 5 shows the action to be executed: apply a 10% discount to the product’s price.
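
In plain Java, the same rule would read roughly like the sketch below. Product here is a minimal stand-in I wrote for illustration, assuming discount(n) subtracts n percent from the current price:

```java
public class RuleAsJava
{
  static class Product
  {
    double price;
    Product(double price) { this.price = price; }
    //Assumption: discount(n) takes n percent off the current price
    void discount(double percent) { price -= price * percent / 100; }
  }

  public static void main(String[] args)
  {
    Product product = new Product(5.0);
    //Equivalent of: when Product(price >= 4.0) then product.discount(10);
    if (product.price >= 4.0)
    {
      product.discount(10);
    }
    System.out.println(product.price); //5.0 minus 10% = 4.5
  }
}
```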

We can write as many rules as needed, and all of them are saved in a file with the .drl extension. But a file is static, and the goal here is to create something dynamic, therefore writing files is not an option!

Hands-on mode

Let’s think about the supermarket where we work. It is promotion time, and the manager asked us to apply a set of discounts on some products:

  • Apply 10% off over all items of $4 or more
  • For all items with due date on next 7 days, apply 45%
  • Give 5% off over every kind of Beans

There we go…

Before starting to code, just make sure you have the following dependencies in your project file pom.xml:

<dependency>
  <groupId>org.drools</groupId>
  <artifactId>drools-compiler</artifactId>
  <version>6.5.0.Final</version>
</dependency>
<dependency>
  <groupId>org.drools</groupId>
  <artifactId>drools-core</artifactId>
  <version>6.5.0.Final</version>
</dependency>
<dependency>
  <groupId>org.drools</groupId>
  <artifactId>drools-decisiontables</artifactId>
  <version>6.5.0.Final</version>
</dependency>

The next approach is to use Rule Templates, where we write the pattern of our rules and later populate it with concrete rules. The following is what we need to accomplish our mission, the file Product.drl:

template header

name
object
conditional
action

package drools.templates;

global net.itfromhell.howit.dummy.Product product;

import java.text.SimpleDateFormat;
import function net.itfromhell.howit.dynamicdrools.util.DroolsUtility.debug;

dialect "java"

template "Product"

rule "@{row.rowNumber} - @{name}"
when
  @{object}(@{conditional})
then
  product.discount(@{action});
  debug(drools);
end

end template

The structure used to create a template file is similar to a regular rule file. Take a look at:

  • Line 1: describes this file as a template
  • Lines 3-6: parameters expected in this template
  • Line 15: declares the dialect used as java
  • Lines 19-25: rule definition
  • Line 27: end of template

Now that we have a template to handle all rules about the products in promotion, the next step is to write a couple of rules describing how to apply the discounts. We will do it programmatically, using a set of classes I coded to do the dirty job:

  • Rule.class: encapsulates all that is needed to create one single business rule.
  • Condition.class: describes one specific condition to be evaluated.
  • DroolsUtility.class: a tool to operate everything in memory.

The complete source code, compiling and ready for duty is available in my GitHub.

I suppose you want to see everything working first, who knows if it isn’t all a joke… in that case, run ShowMeTheDrools.class:

public class ShowMeTheDrools
{
 public static void main(String args[]) throws Exception
 {
  //List to keep all rules
  List<Rule> rules = new ArrayList<Rule>();
  //Load each business rule
  rules.add(createDiscountOverpriced());
  rules.add(createDiscountSoonDueDate());
  rules.add(createDiscountBeans());

  //Create a session to operate Drools in memory
  DroolsUtility utility = new DroolsUtility();
  StatelessKieSession session = utility.loadSession(rules, "drools/templates/Product.drl");

  //Define the products to be processed using our rules
  Product blackBeans = new Product("Black Beans", 2.20, "30/12/2017");
  Product cannelliniBeans = new Product("Cannellini Beans", 4.15, "05/02/2018");
  Product kidneyBeans = new Product("Kidney Beans", 2.05, "20/11/2017");
  Product rice = new Product("Rice", 1.10, "28/10/2017");
  Product milk = new Product("Milk", 3.50, "10/11/2017");

  /*
  Now, the magic happens!
  For each product to be processed, we have to face it over rules to get, or not, a discounted price.
  */
  System.out.println("Applying over " + rice.getName() + " with price $" + rice.getPrice() + "...");
  session.setGlobal("product", rice);
  session.execute(rice);
  System.out.println("...price after review: $" + rice.getPrice());

  System.out.println("Applying over " + blackBeans.getName() + " with price $" + blackBeans.getPrice() + "...");
  session.setGlobal("product", blackBeans);
  session.execute(blackBeans);
  System.out.println("...price after review: $" + blackBeans.getPrice());

  System.out.println("Applying over " + milk.getName() + " with price $" + milk.getPrice() + "...");
  session.setGlobal("product", milk);
  session.execute(milk);
  System.out.println("...price after review: $" + milk.getPrice());

  System.out.println("Applying over " + kidneyBeans.getName() + " with price $" + kidneyBeans.getPrice() + "...");
  session.setGlobal("product", kidneyBeans);
  session.execute(kidneyBeans);
  System.out.println("...price after review: $" + kidneyBeans.getPrice());

  System.out.println("Applying over " + cannelliniBeans.getName() + " with price $" + cannelliniBeans.getPrice() + "...");
  session.setGlobal("product", cannelliniBeans);
  session.execute(cannelliniBeans);
  System.out.println("...price after review: $" + cannelliniBeans.getPrice());
 }
}

As a result we get:

Applying over Rice with price $1.1...
Triggered rule: 1 - Apply discount on all soon due date
...price after review: $0.605

Applying over Black Beans with price $2.2...
Triggered rule: 2 - Discounting on all beans
...price after review: $2.0900000000000003

Applying over Milk with price $3.5...
...price after review: $3.5

Applying over Kidney Beans with price $2.05...
Triggered rule: 2 - Discounting on all beans
...price after review: $1.9474999999999998

Applying over Cannellini Beans with price $4.15...
Triggered rule: 2 - Discounting on all beans
Triggered rule: 0 - Give some discount on overpriced
...price after review: $3.5482500000000003

Verifying the promotional prices, we note a discount on Rice, which started at $1.1 and is now $0.605 (what a bargain! don’t forget to check the due date). We know why: the rule named “Apply discount on all soon due date” was triggered, applying its huge discount.

Black Beans started at $2.2 and finished at $2.09, 5% less, because of the rule “Discounting on all beans”.

But wait, we got no discount on Milk, since the start and final prices are the same $3.5! The simple answer is that no rule was triggered (you can check it).

Something different happened to Cannellini Beans: two rules were triggered. Is that right? Yes, it is. The rule about beans is triggered first, applying 5%; next, the rule about overpricing is triggered, applying another 10%, so the final price is now $3.54. This is interesting, since we are able to chain rules, increasing the complexity of our solution.
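
The Cannellini Beans price can be checked by hand, assuming discount(n) subtracts n percent from the current price each time a rule fires:

```java
public class ChainedDiscounts
{
  //Applies the two chained discounts from the example: 5% (beans), then 10% (overpriced)
  static double finalPrice(double start)
  {
    double price = start;
    price -= price * 5 / 100.0;  //"Discounting on all beans": 4.15 -> 3.9425
    price -= price * 10 / 100.0; //"Give some discount on overpriced": 3.9425 -> 3.54825
    return price;
  }

  public static void main(String[] args)
  {
    //Floating-point noise explains the $3.5482500000000003 seen in the program output
    System.out.println(finalPrice(4.15));
  }
}
```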

Back to the source code of ShowMeTheDrools:

  • All three business rules are created (the next section shows the details)
  • Our utility is used to get a session to operate the rules
  • A set of products is defined
  • We submit each product to all business rules and watch what happens to its price

To make it easy to understand, I created a specific method for each business rule. In the real world that would not make sense. Anyway, let’s see how it works for the requested business rule “Apply 10% off over all items of $4 or more”:

private static Rule createDiscountOverpriced()
{
 //First of all, we create a rule giving it a friendly name
 Rule rule = new Rule("Give some discount on overpriced");
 //Here we need to say what kind of object will be processed
 rule.setDataObject(Product.class.getName());

 //As expected, a rule needs a condition to exist. So, let's create it...
 Condition condition = new Condition();
 //What data, or property, will be checked
 condition.setProperty("price");
 //What kind of check to do
 condition.setOperator(Condition.Operator.GREATER_THAN_OR_EQUAL_TO);
 //What is the value to check
 condition.setValue(new Double(4.0));

 //Next, we need to set the rule's condition
 rule.setCondition(condition);
 //Finally, this is what will be done when our condition is satisfied
 rule.setAction("10");

 return rule;
}

What about “For all items with due date on next 7 days, apply 45%”:

private static Rule createDiscountSoonDueDate() throws Exception
{
 Rule rule = new Rule("Apply discount on all soon due date");
 rule.setDataObject(Product.class.getName());

 //It is possible to create multiple conditions; therefore, date ranges or more complex situations can be expressed
 Condition greaterThan = new Condition();
 greaterThan.setProperty("dueDate");
 greaterThan.setOperator(Condition.Operator.GREATER_THAN);
 greaterThan.setValue((new SimpleDateFormat("dd/MM/yyyy").parse("23/10/2017")));

 Condition lessThan = new Condition();
 lessThan.setProperty("dueDate");
 lessThan.setOperator(Condition.Operator.LESS_THAN);
 lessThan.setValue((new SimpleDateFormat("dd/MM/yyyy").parse("30/10/2017")));

 //You can define as many conditions as necessary to achieve your goal
 rule.setConditions(Arrays.asList(greaterThan, lessThan));
 rule.setAction("45");

 return rule;
}

Finally, we have the third business rule, “Give 5% off over every kind of Beans”:

private static Rule createDiscountBeans()
{
 Rule rule = new Rule("Discounting on all beans");
 rule.setDataObject(Product.class.getName());

 //This is the simplest way to define the rule's condition
 rule.addCondition("name", Condition.Operator.CONTAINS, "Beans");  

 rule.setAction("5");

 return rule;
}

Let’s take a look at the most important pieces of code of each class, starting from the beginning: Rule.class.

As we discussed before, the engine understands the Drools Rule Language; the method conditionAsDRL transforms all conditions of the rule into a textual expression.

 public String conditionAsDRL() throws IllegalStateException, IllegalArgumentException
 {
  if ((conditions == null) || (conditions.isEmpty()))
  {
   throw new IllegalStateException("You must declare at least one condition to be evaluated.");
  }

  StringBuilder drl = new StringBuilder();
  //For each condition of this rule, we create its textual representation
  for (int i = 0; i < conditions.size(); i++)
  {
   Condition condition = conditions.get(i);
   drl.append("(");
   drl.append(condition.buildExpression());
   drl.append(")");
   if ((i + 1) < conditions.size())
   {
    drl.append(" && ");
   }
  }

  return drl.toString();
 }
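
The joining logic of conditionAsDRL can be seen in isolation: given already-built expressions, it wraps each one in parentheses and joins them with &&. A minimal string-only sketch (the class and method names here are mine):

```java
import java.util.Arrays;
import java.util.List;

public class JoinConditions
{
  //Mirrors conditionAsDRL's loop: wrap each expression and join with &&
  static String asDrl(List<String> expressions)
  {
    StringBuilder drl = new StringBuilder();
    for (int i = 0; i < expressions.size(); i++)
    {
      drl.append("(").append(expressions.get(i)).append(")");
      if ((i + 1) < expressions.size())
      {
        drl.append(" && ");
      }
    }
    return drl.toString();
  }

  public static void main(String[] args)
  {
    System.out.println(asDrl(Arrays.asList("price >= 4.0", "name.contains(\"Beans\")")));
    //(price >= 4.0) && (name.contains("Beans"))
  }
}
```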

The class Condition.class has buildExpression, executed by Rule and responsible for returning its own conditional expression.

public String buildExpression() throws IllegalArgumentException
{
 StringBuilder drl = new StringBuilder();

 if (value instanceof String)
 {
  drl.append(expressionForStringValue());
 }
 else if (value instanceof Number)
 {
  drl.append(expressionForNumberValue());
 }
 else if (value instanceof Date)
 {
  drl.append(expressionForDateValue());
 }
 else
 {
  throw new IllegalArgumentException("The class " + value.getClass().getSimpleName() + " of value is not acceptable.");
 }

 return drl.toString();
}

For each type of value there is a specific method responsible for creating the specialized expression, for example expressionForNumberValue, used to transform a Number instance:

private String expressionForNumberValue() throws IllegalArgumentException
{
 StringBuilder drl = new StringBuilder();

 if ((operator.isComparable(Short.class)) || (operator.isComparable(Integer.class)) || (operator.isComparable(Long.class))
     || (operator.isComparable(Double.class)) || (operator.isComparable(Float.class)))
 {
  drl.append(property).append(" ").append(operator.getOperation()).append(" ").append(value);
 }
 else
 {
  throw new IllegalArgumentException("Is not possible to use the operator " + operator.getDescription() + " to a " + value.getClass().getSimpleName() + " object.");
 }

 return drl.toString();
}

Last but not least, the class DroolsUtility.class has two methods worth seeing:

/**
 * Loads a session to execute rules in memory using a template file.
 *
 * @param templatePath Relative path to template file describing the rule's pattern.
 * @param rulesAsParameters List of maps representing each rule as a set of parameters.
 * @return Session for execution of rules.
 * @throws Exception
 */
private StatelessKieSession loadSession(String templatePath, List<Map<String, Object>> rulesAsParameters) throws Exception
{
  ObjectDataCompiler compiler = new ObjectDataCompiler();
  //Compiles the list of rules using the template to create a readable Drools Rules Language
  String drl = compiler.compile(rulesAsParameters, Thread.currentThread().getContextClassLoader().getResourceAsStream(templatePath));

  System.out.println("drl:\n" + drl);

  KieServices services = KieServices.Factory.get();
  KieFileSystem system = services.newKieFileSystem();
  system.write("src/main/resources/drools/templates/rule.drl", drl);
  services.newKieBuilder(system).buildAll();

  KieContainer container = services.newKieContainer(services.getRepository().getDefaultReleaseId());
  StatelessKieSession session = container.getKieBase().newStatelessKieSession();

  return session;
}

The highlighted System.out.println call is a trick: it prints to the console the contents of the generated .drl, which is used by the engine in memory. See below:

package drools.templates;
global net.itfromhell.howit.dummy.Product product;
import java.text.SimpleDateFormat;
import function net.itfromhell.howit.dynamicdrools.util.DroolsUtility.debug;
dialect "java"

rule "2 - Discounting on all beans"
when
  net.itfromhell.howit.dummy.Product((name.toUpperCase().contains("BEANS")))
then
  product.discount(5);
  debug(drools);
end

rule "1 - Apply discount on all soon due date"
when
  net.itfromhell.howit.dummy.Product((dueDate > (new SimpleDateFormat("dd/MM/yyyy HH:mm:ss")).parse("23/10/2017 00:00:00")) && (dueDate < (new SimpleDateFormat("dd/MM/yyyy HH:mm:ss")).parse("30/10/2017 00:00:00")))
then
  product.discount(45);
  debug(drools);
end

rule "0 - Give some discount on overpriced"
when
  net.itfromhell.howit.dummy.Product((price >= 4.0))
then
  product.discount(10);
  debug(drools);
end

As its name suggests, the debug method, imported as a function in the generated DRL above, can be used to get details of the triggered rule when the engine acts.

/**
 * Debug tool to show what is happening over each triggered execution.
 * The name of the triggered rule, as well as the inspected objects, are printed.
 *
 * @param helper Injected when a consequence is fired.
 */
public static void debug(final KnowledgeHelper helper)
{
 System.out.println("Triggered rule: " + helper.getRule().getName());
 if (helper.getMatch() != null && helper.getMatch().getObjects() != null)
 {
  for (Object object : helper.getMatch().getObjects())
  {
   System.out.println("Data object: " + object);
  }
 }
}

That’s all folks!

Remember: live by your own rules 🙂

Download the complete project from my GitHub.


The danger of persistence of decision-making in project management

Recently I was talking to a colleague, during a happy hour, about good projects that lead to bad results. Despite the fact that during happy hour one isn’t supposed to talk about work, the conversation brought up some good reflections.

Giza Pyramids

If the pyramids were software, the ones in the front would be the beta version

Ever since I attended high school in data processing – seventeen long years ago – I have looked at the numbers regarding information-system projects as a laughingstock. I remember one of my professors at that time comparing software engineering with civil engineering, invariably praising the management and construction capability of civil engineering over software engineering. Then again, civil engineers started building the pyramids five thousand years ago; we barely started punching cards a bit over fifty years ago. Historically, they are way ahead of us – or so I used to think.

Returning to the subject, this colleague told me about a project of which he was the manager, one that potentially represented an important evolution for the Company, but that was ultimately becoming a multimillion failure. My insatiable curiosity would not rest until I understood, after all, why there was such a discrepancy between what was planned and what was achieved.

That project, with a budget of almost one and a half million reais, met four of the five classic reasons for initiation: strategic need, market demand, client request, and technological advance. Only the legal requirement was not among them. Avoiding further discussion of so many other details, especially for ethical reasons, I can say that the project under discussion was interesting and promising. But then, what failed?

Chaos Report 2009


According to the 2009 Chaos Report published by the Standish Group, 32% of IT projects were successful – within deadline, budget, scope, and quality (!). Compared with the previous report, published in 2006, the percentage fell by three points, from 35%. Even so, the scenario has improved a lot, because back when I was in high school, the 1998 report showed a 26% success rate.

We can certainly point out many reasons why so many projects have failed (if we built bridges, imagine 68% of them falling). Here we will just focus on my colleague’s project. Of all this sad story, what caught my attention, and what could even be left in the background, was the direct intervention of the IT Director during the supplier-selection procedures. After defining the necessary acquisitions, and passing through the ritual of requests for proposals, the project team ranked the suppliers and identified the one that best met the requirements defined at the beginning of the process. Questionable though it may be, the final decision as to which supplier would be contracted was the Director’s, and he decided for the supplier with the lowest cost and biggest risk (according to the team’s reports). This is where my particular analysis begins, not to question the decision tree of my colleague’s company – who said it is questionable? – but to draw a parallel with the interesting psychological theory of the “freezing effect”, formulated by Kurt Lewin after World War II.

Kurt Lewin

Kurt Lewin formulated the Three Step Theory

After the Director’s decision to hire the least suitable supplier, according to the project team, the software development work began. It was not long before problems of quality and timing appeared, and at first they were worked around within project management. With time and the persistence of the occurrences, timeline milestones kept being missed, and reports were issued with alternative solutions and their projected scenarios. Finally, the management team directly called on the Director to make a decision regarding the supplier. In the first foray, the decision was to keep the supplier, along with a discreet request for the latter’s attention. There were another three or four rounds, in which the same process of identification, creation of options, and decision making was repeated, always with the same result: the maintenance of the supplier. Or would it be the maintenance of the decision?

People tend to adhere to their own decisions, maintaining the commitment to the first decision in an escalation of commitment. This helps to explain why the Director kept his decision for that supplier, even though he had access to reports that advised against it. The decision on supplier selection was deliberately his. This first decision is the foundation of the “Escalation of Commitment” effect (Barry M. Staw, 1976). According to the psychological theory, the initial decision freezes the individual’s system of possible choices, causing him to focus on committing to the behavior most consistent with his decision. After all, this is the common and expected behavior, because in an organized society, what would happen if everyone started behaving one way after having committed to another? Hence this effect is a basic building block in the construction of personality.

Just as the freezing effect explains the Director’s attitude in maintaining his decision, it also explains why the project management team, after analyzing the supplier’s performance, was able to seek solutions and even suggest replacing the supplier: the original decision was not theirs. Without commitment to the decision, the team was free to use the obtained information rationally. It is especially interesting to understand that this psychological phenomenon exists and how it works, so that we can manage it and seek ways to prevent its perverse effects while exploring and maximizing its virtues.

Of course my colleague’s episode is not an exception, as the Chaos Report figures prove. With even more certainty, not all projects have problems with suppliers. Information system projects have so many points of attention that an IT project manager should be promoted to a god (a bit of chauvinism). What is important to extract from this tale is the lesson.

As the freezing effect comprises the Escalation of Commitment, two other concepts are also related: the Irreversible Expense and the Trap.

I can imagine that at a certain moment, while presenting to the Director, the project management team suggested replacing the supplier. The Director, in turn, must have argued – with more or less emotion – that the project had already spent financial resources on said supplier and that it would therefore not be prudent to replace it. This is an example of judgment by Irreversible Expense, the phenomenon that occurs when the individual persists in a decision because he has previously invested money, time, and effort in it, to the detriment of other, potentially more advantageous decisions.

It is worth remembering that the hypothetical manual of good project management says that when deciding on the continuity of a project, efforts already made must not be considered (in accounting terms, these are sunk costs). Therefore, good project management, in itself, should be enough to prevent the Irreversible Expense phenomenon from contaminating decision making.

And as far as timeline milestones are concerned, isn’t one of their functions to serve as a checkpoint to help control the project? Yes, but perhaps they were not used satisfactorily. The last phenomenon, the Trap, can help us understand how to improve the use of those milestones. Who knows, maybe in my colleague’s project this concept could have helped correct the course and ensured that the end result was the one intended.

Prompt to continue with the project

What now?

An experiment conducted in 1979 by Joel Brockner, Myril C. Shaw, and Jeffrey Z. Rubin showed that players who had to make an explicit decision to continue betting – otherwise the game would stop – lost less. On the other hand, those who, on the same occasion, would keep playing unless they actively decided to stop, lost more.

The timeline milestone generally works like the second case of the cited experiment. There is a milestone, where the opportunity for a decision occurs; if no strong decision is taken, the project continues automatically. Like the bettor who loses more, the project keeps consuming resources, even if it is out of control. Here comes the lesson learned: it would be enough to change the way the timeline milestone works so that the initial-decision Trap is controlled. Thus, at each milestone of the project, if there is no explicit decision, the project is discontinued. Radical? Perhaps. Now, think about the resources that would be saved in exchange for more aggressive management.

In this conversation with my colleague, and consequently in reflecting on the case he presented, I did not expect to discover the philosopher’s stone of project management. But I was glad that we came to understand human psychology a little more, and how it guides us in a subtle and transparent way.


Search for a specific text in Oracle database

set serveroutput on size 100000;
declare
--Text to find, case insensitive. Use % for like.
V_TEXT varchar(50) := '%what I want to locate%';
--Tables schema
V_SCHEMA varchar(50) := 'SCHEMA_NAME';
--List of tables to skip; comma-separated or NONE
V_SKIP_NAMES varchar(200) := 'NONE';
--Using temporary table
V_TEMPORARY_TABLE number := 0;
-----------------------------
V_TOTAL_TABLES number := 0;
V_TOTAL_COLUMNS number := 0;
V_TOTAL_RESULT number := 0;
V_TIMER number;
V_QUERY varchar(4000);
--Select all schema's tables
cursor
V_TABLE_LIST
is
select
OBJECT_NAME
from
ALL_OBJECTS
where
(OBJECT_TYPE = 'TABLE')
and (OWNER = V_SCHEMA)
--Eventual exclusions
and (OBJECT_NAME not in (V_SKIP_NAMES))
--Debug purposes
--and (OBJECT_NAME in ('GM_SFL_SERVICE_FEE_DETAILS'))
order by
OBJECT_NAME
;
begin
V_TIMER := dbms_utility.get_time();
begin
if (V_TEMPORARY_TABLE = 1) then
execute immediate 'create global temporary table TMP_LOOKUP_TEXT (SCHEMA_NAME varchar(50), TABLE_NAME varchar2(50), COLUMN_NAME varchar2(50), COLUMN_VALUE varchar2(1000)) on commit preserve rows';
end if;
exception when others then
dbms_output.put_line('Fail to create temporary table: ' || sqlerrm);
end;
--For each table
for V_TABLE_ITEM in V_TABLE_LIST loop
V_TOTAL_TABLES := V_TOTAL_TABLES + 1;
--dbms_output.put_line('Checking table: ' || V_TABLE_ITEM.OBJECT_NAME);
declare
--Select all table's columns
cursor
V_COLUMN_LIST
is
select
OWNER as SCHEMA_NAME
, TABLE_NAME
, COLUMN_NAME
, DATA_TYPE
from
SYS.ALL_TAB_COLUMNS
where
(OWNER = V_SCHEMA)
and (TABLE_NAME = V_TABLE_ITEM.OBJECT_NAME)
and (COLUMN_NAME not in (V_SKIP_NAMES))
and (DATA_TYPE in ('NVARCHAR2', 'CHAR', 'VARCHAR2'))
order by
COLUMN_NAME
;
begin
--For each column
for V_COLUMN_ITEM in V_COLUMN_LIST loop
V_TOTAL_COLUMNS := V_TOTAL_COLUMNS + 1;
--dbms_output.put_line('Checking column: ' || V_COLUMN_ITEM.COLUMN_NAME);
V_QUERY := 'select ' || V_COLUMN_ITEM.COLUMN_NAME || ' from ' || V_COLUMN_ITEM.SCHEMA_NAME || '.' || V_TABLE_ITEM.OBJECT_NAME || ' where upper(' || V_COLUMN_ITEM.COLUMN_NAME || ') like upper(' || chr(39) || V_TEXT || chr(39) || ')';
--dbms_output.put_line('Query: ' || V_QUERY);
declare
type V_MATCH_RECORD is record
(
V_VALUE varchar2(10000)
);
type V_MATCH_TABLE is table of V_MATCH_RECORD;
type V_MATCH_CURSOR_TYPE is ref cursor;
V_MATCH_LIST V_MATCH_TABLE;
V_MATCH_SELECT V_MATCH_CURSOR_TYPE;
V_INDEX binary_integer;
begin
--Select columns where the value matches
open V_MATCH_SELECT for V_QUERY;
fetch V_MATCH_SELECT bulk collect into V_MATCH_LIST;
close V_MATCH_SELECT;
--Found?
if (V_MATCH_LIST.count > 0) then
V_INDEX := V_MATCH_LIST.first;
while V_INDEX is not null loop
V_TOTAL_RESULT := V_TOTAL_RESULT + 1;
if (V_TEMPORARY_TABLE = 0) then
dbms_output.put_line(V_COLUMN_ITEM.SCHEMA_NAME || '.' || V_COLUMN_ITEM.TABLE_NAME || '.' || V_COLUMN_ITEM.COLUMN_NAME || ' = ' || substr(V_MATCH_LIST(V_INDEX).V_VALUE, 1, 50));
else
execute immediate 'insert into TMP_LOOKUP_TEXT (SCHEMA_NAME, TABLE_NAME, COLUMN_NAME, COLUMN_VALUE) values (:1, :2, :3, :4)' using V_COLUMN_ITEM.SCHEMA_NAME, V_COLUMN_ITEM.TABLE_NAME, V_COLUMN_ITEM.COLUMN_NAME, V_MATCH_LIST(V_INDEX).V_VALUE;
end if;
V_INDEX := V_MATCH_LIST.next(V_INDEX);
end loop;
end if;
exception when others then
dbms_output.put_line('Fail to report match: ' || sqlerrm);
end;
end loop;
end;
end loop;
V_TIMER := dbms_utility.get_time() - V_TIMER;
--get_time returns hundredths of a second
dbms_output.put_line('Read ' || V_TOTAL_COLUMNS || ' columns from ' || V_TOTAL_TABLES || ' tables in ' || (V_TIMER / 100) || ' seconds. Found ' || V_TOTAL_RESULT || ' matches to ' || chr(39) || V_TEXT || chr(39) || ' in schema ' || V_SCHEMA || '.');
begin
if (V_TEMPORARY_TABLE = 1) then
--An execute immediate of a select would fetch nothing here; tell the user where the results are instead
dbms_output.put_line('Results stored in TMP_LOOKUP_TEXT; read them with: select * from TMP_LOOKUP_TEXT order by TABLE_NAME, COLUMN_NAME;');
end if;
exception when others then
dbms_output.put_line('Fail to list result in temporary table: ' || sqlerrm);
end;
end;

Listing relationship constraints in SQL Server

DECLARE @tableName VARCHAR(64);
SET @tableName = 'MY_TABLE_NAME';
SELECT
SO_P.name AS parent_table
, SC_P.name AS parent_column
, 'is a foreign key of' AS direction
, SO_R.name AS referenced_table
, SC_R.name AS referenced_column
, *
FROM
sys.foreign_key_columns FKC
INNER JOIN sys.objects SO_P ON SO_P.object_id = FKC.parent_object_id
INNER JOIN sys.columns SC_P ON (SC_P.object_id = FKC.parent_object_id) AND (SC_P.column_id = FKC.parent_column_id)
INNER JOIN sys.objects SO_R ON SO_R.object_id = FKC.referenced_object_id
INNER JOIN sys.columns SC_R ON (SC_R.object_id = FKC.referenced_object_id) AND (SC_R.column_id = FKC.referenced_column_id)
WHERE
((SO_P.name = @tableName) AND (SO_P.type = 'U'))
OR
((UPPER(SO_R.name) = UPPER(@tableName)) AND (SO_R.type = 'U'))
;

PostgreSQL script to look up a text

DO $$ <<lookup_contents>>
DECLARE
_search VARCHAR := 'something'; -- what are you looking for?
_limit INTEGER := 1000000;
_total_columns INTEGER := 0;
_total_read INTEGER := 0;
_mark INTEGER;
_text_columns REFCURSOR;
_text_column RECORD;
_address_columns REFCURSOR;
_address_column RECORD;
BEGIN
RAISE NOTICE 'Look up contents for %', _search;
-- For each column in the database
OPEN _text_columns FOR
SELECT
t.table_schema
, t.table_name
, c.column_name
, CONCAT('SELECT ', c.column_name, ' AS data_found FROM ', t.table_schema, '.', t.table_name, ' WHERE (UPPER(', c.column_name,') LIKE UPPER(''%', _search, '%'')) ORDER BY ', c.column_name,';') AS "selection_query"
FROM
information_schema.tables t
INNER JOIN
information_schema.columns c
ON c.table_name = t.table_name
AND c.table_schema = t.table_schema
WHERE
(t.table_schema = 'public') -- schema
AND (t.table_type = 'BASE TABLE') -- tables
AND ((c.character_maximum_length IS NOT NULL) AND (c.character_maximum_length > 1)) -- more than 1 character
AND (c.udt_name IN ('bpchar', 'char', 'varchar', 'text')) -- only texts
ORDER BY
t.table_schema, t.table_name
;
LOOP
FETCH _text_columns INTO _text_column;
EXIT WHEN NOT FOUND;
IF _total_read >= _limit THEN
RAISE NOTICE 'Reached the limit of % reads', _limit;
EXIT;
END IF;
_total_columns := _total_columns + 1;
SELECT INTO _mark MOD(_total_columns, 10);
IF _mark = 0 THEN
RAISE NOTICE 'Processed % columns so far…', _total_columns;
END IF;
RAISE NOTICE 'Executing: %', _text_column.selection_query;
OPEN _address_columns FOR EXECUTE _text_column.selection_query;
LOOP
FETCH _address_columns INTO _address_column;
EXIT WHEN NOT FOUND;
_total_read := _total_read + 1;
RAISE NOTICE 'Found: %.%.% = %', _text_column.table_schema, _text_column.table_name, _text_column.column_name, SUBSTRING(_address_column.data_found, 1, 100);
IF _total_read >= _limit THEN
RAISE NOTICE 'Reached the limit of % reads', _limit;
EXIT;
END IF;
END LOOP;
CLOSE _address_columns;
END LOOP;
CLOSE _text_columns;
RAISE NOTICE 'Concluded reading % columns, finding % matching records', _total_columns, _total_read;
END
lookup_contents $$;

SQL Server script to scan the whole database looking for a specific value in a specific column


DECLARE @search_column VARCHAR(255);
DECLARE @search_value VARCHAR(255);
DECLARE @search_type INT; /* 1: specific column 2: string columns 3: numeric columns */
SET @search_column = 'SEARCHING_COLUMN_NAME'; /* wrap with % for a wider search */
SET @search_value = 'Any value'; /* wrap with % for a wider search */
SET @search_type = 1;
DECLARE @containing_tables TABLE
(
table_name VARCHAR(255) NOT NULL,
column_name VARCHAR(255) NOT NULL,
column_value VARCHAR(255) NOT NULL
);
DECLARE all_tables_cursor CURSOR FAST_FORWARD for
SELECT
t.name AS 'TableName', c.name AS 'ColumnName'
FROM
sys.columns c JOIN sys.tables t ON c.object_id = t.object_id
WHERE
((@search_type = 1) AND (UPPER(c.name) LIKE UPPER(@search_column))) /* search into a specific column */
OR
((@search_type = 2) AND (c.user_type_id IN (35, 99, 167, 175, 231, 239))) /* search into all string columns */
OR
((@search_type = 3) AND (c.user_type_id IN (48,52,56,108,127))) /* search into all numeric columns */
ORDER BY TableName, ColumnName;
DECLARE @table_name VARCHAR(255), @column_name VARCHAR(255)
OPEN all_tables_cursor
FETCH NEXT FROM all_tables_cursor INTO @table_name, @column_name
WHILE (@@FETCH_STATUS <> -1)
BEGIN
IF (@@FETCH_STATUS <> -2)
BEGIN
BEGIN TRY
DECLARE @select_statement VARCHAR(1000);
SET @select_statement = 'DECLARE all_values_cursor CURSOR FAST_FORWARD FOR SELECT [' + @column_name + '] FROM [' + @table_name + '] WHERE (CONVERT(VARCHAR(255), [' + @column_name + ']) LIKE ' + CHAR(39) + @search_value + CHAR(39) + ');';
EXEC(@select_statement)
DECLARE @column_value VARCHAR(255)
OPEN all_values_cursor
FETCH NEXT FROM all_values_cursor INTO @column_value
WHILE (@@FETCH_STATUS <> -1)
BEGIN
IF (@@FETCH_STATUS <> -2)
BEGIN
INSERT INTO @containing_tables (table_name, column_name, column_value) VALUES (@table_name, @column_name, @column_value);
END
FETCH NEXT FROM all_values_cursor INTO @column_value
END
CLOSE all_values_cursor;
DEALLOCATE all_values_cursor;
END TRY
BEGIN CATCH
PRINT 'Error reading ' + @table_name + '.' + @column_name + ': ' + ERROR_MESSAGE();
END CATCH;
END
FETCH NEXT FROM all_tables_cursor INTO @table_name, @column_name
END
CLOSE all_tables_cursor;
DEALLOCATE all_tables_cursor;
/* Bonus instruction: you can customize the print below to accommodate specific needs */
DECLARE all_selects_cursor CURSOR FAST_FORWARD for
SELECT
table_name, column_name, column_value
FROM
@containing_tables
GROUP BY table_name, column_name, column_value;
OPEN all_selects_cursor
FETCH NEXT FROM all_selects_cursor INTO @table_name, @column_name, @column_value
WHILE (@@FETCH_STATUS <> -1)
BEGIN
IF (@@FETCH_STATUS <> -2)
BEGIN
BEGIN TRY
PRINT 'SELECT * FROM ' + @table_name + ' WHERE (' + @column_name + ' = ' + CHAR(39) + @search_value + CHAR(39) + ');';
END TRY
BEGIN CATCH
PRINT 'Error reading ' + @table_name + '.' + @column_name + ': ' + ERROR_MESSAGE();
END CATCH;
END
FETCH NEXT FROM all_selects_cursor INTO @table_name, @column_name, @column_value
END
CLOSE all_selects_cursor;
DEALLOCATE all_selects_cursor;
SELECT table_name, column_name, column_value, count(column_value) AS repeating FROM @containing_tables GROUP BY table_name, column_name, column_value;



Importing a public SSL certificate into a JVM

You are trying to connect your pretty application to an external HTTPS endpoint and getting an odd, out-of-the-blue exception like the one below:

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

That’s happening because, when connecting via HTTPS, the destination’s public SSL certificate must be available in the JVM truststore, and sometimes the truststore is not up to date. So, let’s fix it with two simple command lines.

1. Have the public certificate in your machine:

Linux

openssl s_client -connect <HOST>:443 -servername <HOST>:443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > public.crt

Windows

openssl s_client -connect <HOST>:443 -servername <HOST>:443 < NUL | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > public.crt

2. Import the public certificate:

<JDK_HOME>\bin\keytool -import -alias <HOST> -keystore <JRE_HOME>\lib\security\cacerts -file public.crt
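
To double-check from code which certificate authorities your JVM currently trusts (before or after the import), you can load the default trust managers. A small standalone sketch using only the JDK:

```java
// Counts the CA certificates trusted by the JVM's default truststore (cacerts).
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import java.security.KeyStore;

public class TrustedCas {
    static int countTrustedCas() throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null = use the JVM's default truststore
        int total = 0;
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                total += ((X509TrustManager) tm).getAcceptedIssuers().length;
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Trusted CA certificates: " + countTrustedCas());
    }
}
```

On a standard JDK the count is in the dozens; if your endpoint’s CA is not among them, the keytool import above is the fix.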

All good!


SQL Server script to replicate all data from a remote database to a local database


/*
Replicates all data from the remote database to the local database. All current local data is erased and is not recoverable.
You have to set the value for @LINKED_SERVER_NAME as your local Linked Server name for the remote SQL Server instance.
*/
print 'Starting replication…';
-- Control for structure alterations over the process
declare @CONTROL_TABLES table
(
TABLE_NAME varchar(255) NOT NULL,
CONSTRAINT_NAME varchar(255) NULL,
STATUS_ENABLED int NULL,
HAS_IDENTITY int NULL
);
-- Local linked server alias
declare @LINKED_SERVER_NAME varchar(255);
-- Database name (both "from" and "to" must be the same name)
declare @DATABASE_COPY varchar(255);
/**
Define controllers
**/
print 'Setting controllers…';
set @LINKED_SERVER_NAME = '127.0.0.1,11433';
-- Using local database name
select @DATABASE_COPY = db_name();
-- Identify all user tables with constraints
insert into @CONTROL_TABLES (TABLE_NAME, CONSTRAINT_NAME, STATUS_ENABLED, HAS_IDENTITY)
select
O2.NAME,
O.NAME,
case when ((C.STATUS & 0x4000)) = 0 then 1 else 0 end,
null
from SYS.SYSCONSTRAINTS C
inner join SYS.SYSOBJECTS O on O.ID = C.CONSTID
inner join SYS.SYSOBJECTS O2 on O2.ID = O.PARENT_OBJ
where
(O2.NAME in (select TABLE_NAME from INFORMATION_SCHEMA.TABLES where (TABLE_TYPE = 'BASE TABLE')))
and (O.XTYPE in ('C', 'F'))
;
-- Identify all user tables with identity
update A
set
HAS_IDENTITY = 1
from
@CONTROL_TABLES as A
inner join (select
T.NAME as NAME
from
SYS.SCHEMAS as S
inner join SYS.TABLES as T on S.SCHEMA_ID = T.SCHEMA_ID
where exists
(
select 1 from SYS.IDENTITY_COLUMNS
where OBJECT_ID = T.OBJECT_ID
)) as B on A.TABLE_NAME = B.NAME
;
insert into @CONTROL_TABLES (TABLE_NAME, HAS_IDENTITY)
select
T.NAME,
1
from
SYS.SCHEMAS as S
inner join SYS.TABLES as T on S.SCHEMA_ID = T.SCHEMA_ID
where
exists (select 1 from SYS.IDENTITY_COLUMNS where OBJECT_ID = T.OBJECT_ID)
and (T.NAME not in (select TABLE_NAME from @CONTROL_TABLES))
;
/**
For each table, disable its constraints
**/
print 'Disabling constraints…';
declare NOCHECK_STATEMENTS_CURSOR cursor FAST_FORWARD for
select
distinct 'alter table ' + TABLE_NAME + ' nocheck constraint ' + CONSTRAINT_NAME + ';'
from
@CONTROL_TABLES
where
(STATUS_ENABLED = 1);
declare @NOCHECK_DISABLE_STATEMENT varchar(255)
open NOCHECK_STATEMENTS_CURSOR
fetch next from NOCHECK_STATEMENTS_CURSOR into @NOCHECK_DISABLE_STATEMENT
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
print @NOCHECK_DISABLE_STATEMENT
exec(@NOCHECK_DISABLE_STATEMENT)
end
fetch next from NOCHECK_STATEMENTS_CURSOR into @NOCHECK_DISABLE_STATEMENT
end
deallocate NOCHECK_STATEMENTS_CURSOR;
/**
For each table, disable its triggers
**/
print 'Disabling triggers…';
declare DISABLE_STATEMENTS_CURSOR cursor fast_forward for
select
'disable trigger ' + SYSOBJECTS.NAME + ' on ' + object_name(PARENT_OBJ) + ';'
from
SYSOBJECTS
inner join SYS.TABLES T on SYSOBJECTS.PARENT_OBJ = T.OBJECT_ID
inner join SYS.SCHEMAS S on T.SCHEMA_ID = S.SCHEMA_ID
where
(SYSOBJECTS.TYPE = 'TR');
declare @DISABLE_STATEMENT varchar(255)
open DISABLE_STATEMENTS_CURSOR
fetch next from DISABLE_STATEMENTS_CURSOR into @DISABLE_STATEMENT
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
print @DISABLE_STATEMENT
exec(@DISABLE_STATEMENT)
end
fetch next from DISABLE_STATEMENTS_CURSOR into @DISABLE_STATEMENT
end
deallocate DISABLE_STATEMENTS_CURSOR;
/**
Remove all local contents
**/
print 'Removing local contents…';
declare DELETE_STATEMENTS_CURSOR cursor fast_forward for
select
'delete from ' + TABLE_NAME + ';'
from
INFORMATION_SCHEMA.TABLES
where
(TABLE_TYPE = 'BASE TABLE')
order by TABLE_NAME
;
declare @DELETE_STATEMENT varchar(255)
open DELETE_STATEMENTS_CURSOR
fetch next from DELETE_STATEMENTS_CURSOR into @DELETE_STATEMENT
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
print @DELETE_STATEMENT
exec(@DELETE_STATEMENT)
end
fetch next from DELETE_STATEMENTS_CURSOR into @DELETE_STATEMENT
end
deallocate DELETE_STATEMENTS_CURSOR;
/**
Copy data from remote server to local environment
**/
print 'Copying data from remote server…';
-- All non-identity tables
declare COPY_STATEMENTS_CURSOR cursor fast_forward for
select
'insert into ' + TABLE_NAME + ' select * from "' + @LINKED_SERVER_NAME + '".' + @DATABASE_COPY + '.dbo.' + TABLE_NAME + ';'
from
INFORMATION_SCHEMA.TABLES
where
(TABLE_TYPE = 'BASE TABLE')
and (TABLE_NAME not in (select TABLE_NAME from @CONTROL_TABLES where (HAS_IDENTITY = 1)))
order by TABLE_NAME
;
declare @COPY_STATEMENT varchar(max);
open COPY_STATEMENTS_CURSOR
fetch next from COPY_STATEMENTS_CURSOR into @COPY_STATEMENT
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
print @COPY_STATEMENT
exec(@COPY_STATEMENT)
end
fetch next from COPY_STATEMENTS_CURSOR into @COPY_STATEMENT
end
deallocate COPY_STATEMENTS_CURSOR;
-- All identity tables
declare @TABLE_NAME varchar(255);
declare @COLUMNS_NAME varchar(max);
declare COPY_IDENTITY_STATEMENTS_CURSOR cursor fast_forward for
select
distinct(TABLE_NAME)
from
@CONTROL_TABLES
where
(HAS_IDENTITY = 1)
order by TABLE_NAME
;
declare @COPY_IDENTITY_STATEMENT varchar(max);
open COPY_IDENTITY_STATEMENTS_CURSOR
fetch next from COPY_IDENTITY_STATEMENTS_CURSOR into @TABLE_NAME
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
set @COLUMNS_NAME = null;
select @COLUMNS_NAME = coalesce(@COLUMNS_NAME + ',', '') + COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where (TABLE_NAME = @TABLE_NAME);
set @COPY_IDENTITY_STATEMENT = 'set identity_insert ' + @TABLE_NAME + ' on;';
set @COPY_IDENTITY_STATEMENT = @COPY_IDENTITY_STATEMENT + 'insert into ' + @TABLE_NAME + ' (' + @COLUMNS_NAME + ') select ' + @COLUMNS_NAME + ' from "' + @LINKED_SERVER_NAME + '".' + @DATABASE_COPY + '.dbo.' + @TABLE_NAME + ';'
set @COPY_IDENTITY_STATEMENT = @COPY_IDENTITY_STATEMENT + 'set identity_insert ' + @TABLE_NAME + ' off;';
print @COPY_IDENTITY_STATEMENT;
exec(@COPY_IDENTITY_STATEMENT);
end
fetch next from COPY_IDENTITY_STATEMENTS_CURSOR into @TABLE_NAME
end
deallocate COPY_IDENTITY_STATEMENTS_CURSOR;
/**
For each table, enable its triggers
**/
print 'Enabling triggers…';
declare ENABLE_STATEMENTS_CURSOR cursor fast_forward for
select
'enable trigger ' + SYSOBJECTS.NAME + ' on ' + object_name(PARENT_OBJ) + ';'
from
SYSOBJECTS
inner join SYS.TABLES T on SYSOBJECTS.PARENT_OBJ = T.OBJECT_ID
inner join SYS.SCHEMAS S on T.SCHEMA_ID = S.SCHEMA_ID
where
(SYSOBJECTS.TYPE = 'TR');
declare @ENABLE_STATEMENT varchar(255)
open ENABLE_STATEMENTS_CURSOR
fetch next from ENABLE_STATEMENTS_CURSOR into @ENABLE_STATEMENT
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
print @ENABLE_STATEMENT
exec(@ENABLE_STATEMENT)
end
fetch next from ENABLE_STATEMENTS_CURSOR into @ENABLE_STATEMENT
end
deallocate ENABLE_STATEMENTS_CURSOR;
/**
For each table, enable its constraints
**/
print 'Enabling constraints…';
declare CHECK_STATEMENTS_CURSOR cursor fast_forward for
select
'alter table ' + TABLE_NAME + ' with check check constraint ' + CONSTRAINT_NAME + ';'
from
@CONTROL_TABLES
where
(STATUS_ENABLED = 1)
;
declare @CHECK_STATEMENT varchar(max);
open CHECK_STATEMENTS_CURSOR
fetch next from CHECK_STATEMENTS_CURSOR into @CHECK_STATEMENT
while (@@fetch_status <> -1)
begin
if (@@fetch_status <> -2)
begin
print @CHECK_STATEMENT;
exec(@CHECK_STATEMENT);
end
fetch next from CHECK_STATEMENTS_CURSOR into @CHECK_STATEMENT
end
deallocate CHECK_STATEMENTS_CURSOR;
print '…all good!';


Redirect Tomcat console messages to file

Despite the fact that the official Tomcat Wiki says:

System.out and System.err are both redirected to CATALINA_BASE/logs/catalina.out when using Tomcat’s startup scripts (bin/startup.sh/.bat or bin/catalina.sh/.bat). Any code that writes to System.out or System.err will end up writing to that file. (link)

For some reason that escapes my understanding, this is not 100% true. Many stray System.out messages don’t go to the catalina file, being readable only from the console.

So, the solution to guarantee that all console messages become available in a file (usually for troubleshooting purposes) is:

  1. Locate the file startup.bat (or startup.sh on Linux)
  2. Look inside it for call "%EXECUTABLE%" start %CMD_LINE_ARGS%
  3. Replace that command with call "%EXECUTABLE%" run %CMD_LINE_ARGS% >..\logs\console.log 2>&1
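
If editing the startup script is not possible, a similar effect can be achieved from plain Java code by swapping System.out yourself (a generic sketch, not something Tomcat does for you):

```java
// Redirects System.out to a file, so every later println lands there.
import java.io.File;
import java.io.PrintStream;
import java.nio.file.Files;

public class RedirectOut {
    static File redirectTo(String fileName) throws Exception {
        File target = new File(fileName);
        PrintStream fileOut = new PrintStream(target, "UTF-8");
        System.setOut(fileOut); // from now on, System.out.println writes to the file
        return target;
    }

    public static void main(String[] args) throws Exception {
        PrintStream console = System.out;   // keep a reference to the real console
        File log = redirectTo("console.log");
        System.out.println("hello, file");
        System.out.flush();
        System.setOut(console);             // restore the console
        console.println("Logged: " + Files.readAllLines(log.toPath()).get(0));
    }
}
```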



Spring Framework marks transaction to rollback when everything is fine

Eventually you cannot conclude the whole processing flow because, at the end, the Spring Framework marks the transaction for rollback, even though all exceptions were caught.

The evidence of this scenario is the following message: Transaction was marked for rollback only; cannot commit. The exception thrown is org.springframework.orm.jpa.JpaSystemException.

It happens because, at some point in the processed code, your flow threw an unchecked exception inside the transaction. The simplest solution is to use checked exceptions, and to avoid NoResultException unless it is really necessary.
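
There is a simple rule behind this behavior: by default, Spring marks the transaction for rollback on unchecked exceptions (RuntimeException and Error) but not on checked ones. A plain-Java simplification of that default rule (not Spring’s actual code):

```java
// Simplified model of @Transactional's default rollback rule.
public class RollbackRule {
    static boolean rollsBackByDefault(Throwable thrown) {
        // Unchecked exceptions and errors trigger rollback; checked exceptions do not.
        return (thrown instanceof RuntimeException) || (thrown instanceof Error);
    }

    public static void main(String[] args) {
        System.out.println(rollsBackByDefault(new IllegalStateException())); // unchecked
        System.out.println(rollsBackByDefault(new java.io.IOException()));   // checked
    }
}
```

Note that NoResultException is a RuntimeException, so even if your code catches it, an inner transactional boundary may already have marked the transaction as rollback-only.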