Channel: general – jPOS.org

TransactionManager’s new property: call-selector-on-abort


TransactionManager is one of the most important features of jPOS. It is used to implement transaction flows using TransactionParticipants. Usually the participants are organized in groups, and the GroupSelector interface is used to implement decision points in the transaction flow. The TransactionManager calls each participant in the order they appear in the deployment configuration XML file. When it encounters a participant that implements the GroupSelector interface, its select() method is called, which should return the names of the groups to be called next. The TransactionManager then recursively calls the participants in the selected group(s).
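As a sketch, a selector participant might look like this. This is a simplified, self-contained model of the contract: the real interfaces live in org.jpos.transaction, and the class name, group names, flag values and the "MTI" context entry are all illustrative.

```java
import java.io.Serializable;
import java.util.HashMap;

// Simplified, self-contained model of the GroupSelector contract.
// The real interfaces live in org.jpos.transaction; flag values here
// are illustrative.
public class SelectorSketch {
    static final int PREPARED = 1, NO_JOIN = 0x40; // illustrative flag values

    interface TransactionParticipant {
        int prepare (long id, Serializable ctx);
        void commit (long id, Serializable ctx);
        void abort (long id, Serializable ctx);
    }

    interface GroupSelector extends TransactionParticipant {
        // returns space-separated names of the groups to call next
        String select (long id, Serializable ctx);
    }

    // Hypothetical selector: routes on an "MTI" entry in a map-based context
    static class MtiSelector implements GroupSelector {
        public int prepare (long id, Serializable ctx) {
            return PREPARED | NO_JOIN; // prepare OK, skip the commit phase
        }
        public String select (long id, Serializable ctx) {
            Object mti = ((HashMap<?, ?>) ctx).get ("MTI");
            return "0200".equals (mti) ? "authorization" : "unhandled";
        }
        public void commit (long id, Serializable ctx) { }
        public void abort (long id, Serializable ctx) { }
    }

    public static void main (String[] args) {
        HashMap<String, Object> ctx = new HashMap<>();
        ctx.put ("MTI", "0200");
        System.out.println (new MtiSelector().select (0L, ctx)); // prints "authorization"
    }
}
```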

This calling of participants is done twice. First, the prepare() method is called during the PREPARE phase. If all the participants are PREPARED, the commit() method of the participants is called in the COMMIT phase. A participant can abstain from being called during COMMIT by returning PREPARED | NO_JOIN from its prepare() method. For GroupSelectors, the prepare() method is called first and then the select() method is called.

If any participant returns ABORTED from its prepare() method, the transaction flow changes. From that point onwards, the prepareForAbort() method is called for the remaining participants that implement the AbortParticipant interface.
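The AbortParticipant contract can be sketched like this. Again a simplified, self-contained model: the real interface is org.jpos.transaction.AbortParticipant, and the class name and flag values are illustrative.

```java
import java.io.Serializable;

// Simplified, self-contained model of the AbortParticipant contract.
// The real interface is org.jpos.transaction.AbortParticipant; flag
// values here are illustrative.
public class AbortSketch {
    static final int PREPARED = 1, NO_JOIN = 0x40; // illustrative flag values

    interface AbortParticipant {
        int prepare (long id, Serializable ctx);
        // called instead of prepare() once the transaction has aborted
        int prepareForAbort (long id, Serializable ctx);
        void commit (long id, Serializable ctx);
        void abort (long id, Serializable ctx);
    }

    // Hypothetical participant that must clean up even on an aborted flow
    static class CleanupParticipant implements AbortParticipant {
        public int prepare (long id, Serializable ctx) {
            return PREPARED; // normal flow: join the commit/abort phase
        }
        public int prepareForAbort (long id, Serializable ctx) {
            // e.g. release resources, write an audit record
            return PREPARED | NO_JOIN;
        }
        public void commit (long id, Serializable ctx) { }
        public void abort (long id, Serializable ctx) { }
    }

    public static void main (String[] args) {
        CleanupParticipant p = new CleanupParticipant();
        System.out.println (p.prepareForAbort (1L, null));
    }
}
```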

Interestingly, if the participant that first aborts, or any subsequent participant, happens to implement GroupSelector, then whether its select() method is called is controlled by the new call-selector-on-abort boolean property of the TransactionManager. Before this property was introduced, the TransactionManager would always call the select() method of such GroupSelectors. Therefore the default value of the call-selector-on-abort property is true, to ensure backward compatibility.

Let’s see an example. Suppose we have a TransactionManager configuration like the one below:

<txnmgr class="org.jpos.transaction.TransactionManager" logger="Q2">
  <property name="queue" value="myTxnQueue" />
  <property name="sessions" value="2" />
  <property name="max-active-sessions" value="15" />
  <property name="debug" value="true" />
  ....
  <participant class="com.my.company.Participant1" logger="Q2" />
  <participant class="com.my.company.Participant2" logger="Q2" />

  <participant class="com.my.company.Selector1" logger="Q2">
     <property name="12" value="group1 group2"/>
     <property name="34" value="group3 group4"/>
  </participant>

 <group name="group1">
   <participant class="com.my.company.Participant11" logger="Q2" />
   <participant class="com.my.company.Participant12" logger="Q2" />
   <participant class="com.my.company.Selector2" logger="Q2">
     <property name="ab" value="groupA"/>
     <property name="cd" value="groupC groupD"/>
   </participant>
 </group>

 <group name="groupA">
   <participant class="com.my.company.Participant11A" logger="Q2" />
   <participant class="com.my.company.Participant11B" logger="Q2" />
 </group>

 <participant class="com.my.company.Participant3" logger="Q2" />
 ....
</txnmgr>

Let us assume that:

  • Selector1 and Selector2 implement the GroupSelector interface
  • Selector1 will return “12”
  • Selector2 will return “ab”
  • Participant12, Participant11A and Participant3 implement the AbortParticipant interface
  • Participant1’s prepare() method returns PREPARED | NO_JOIN
  • Participant2’s prepare() method returns PREPARED
  • Selector1’s prepare() method returns ABORTED

Now, since the call-selector-on-abort parameter is not defined, it defaults to true and the transaction will be processed by the TransactionManager as below:

        prepare: com.my.company.Participant1 NO_JOIN
        prepare: com.my.company.Participant2
        prepare: com.my.company.Selector1 ABORTED
       selector: com.my.company.Selector1 group1 group2
prepareForAbort: com.my.company.Participant12
       selector: com.my.company.Selector2 groupA
prepareForAbort: com.my.company.Participant11A
prepareForAbort: com.my.company.Participant3
          abort: com.my.company.Participant2
          abort: com.my.company.Selector1
          abort: com.my.company.Participant12
          abort: com.my.company.Participant11A
          abort: com.my.company.Participant3
   ....

Now if we set the call-selector-on-abort property to false

<txnmgr class="org.jpos.transaction.TransactionManager" logger="Q2">
  <property name="queue" value="myTxnQueue" />
  <property name="sessions" value="2" />
  <property name="max-active-sessions" value="15" />
  <property name="debug" value="true" />
  <property name="call-selector-on-abort" value="false" />
  ....

With that the TransactionManager would behave something like this:

        prepare: com.my.company.Participant1 NO_JOIN
        prepare: com.my.company.Participant2
        prepare: com.my.company.Selector1 ABORTED
prepareForAbort: com.my.company.Participant3
          abort: com.my.company.Participant2
          abort: com.my.company.Selector1
          abort: com.my.company.Participant3
   ....

As one can see, the call-selector-on-abort property significantly affects the transaction flow when the transaction aborts. If no participant aborts, this property does not come into the picture at all.


Six years under the AGPL


/by @apr thinking out loud/

We moved jPOS to the AGPL license about six years ago. In hindsight, I’d like to share my thoughts about the move.

  • The AGPL is a good license, perfect for our project, if people were to read it.
  • It is based on the honor system, but nobody cares about honor these days unless honor is enforced one way or another.
  • My perception is that for a large number of developers, OpenSource is the same as Free Software, Apache license is the same as GPL, LGPL, AGPL, MPL, Potatoes, Potatos, same thing.

We used to sell commercial licenses under the jPOS PEP, which is a combination of license + coaching/hand-holding. Participants get it for the hand-holding part; they rarely care to get a signed agreement, and we need to push to get them signed. [UPDATE: we are not accepting new PEP members as of August/2010] We have some true license purchases; those come either from large companies with large legal teams reviewing every license, or from companies being purchased/merged during due-diligence procedures.

The downsides of a license like this are, IMHO:

  • People not willing to release their code under a compatible license (that’s probably 100% of our users) and not willing to purchase a commercial license (that’s about 99.96% according to our guesstimates) feel guilty and go dark, never participate, never contribute code, and limit themselves to asking questions from public e-mail addresses with lots of numbers in them. They know they are free riders, and nobody is proud of that, so they hide.
  • Some open source power users and projects know the license is somehow restrictive, so if they can, they avoid it.

So my belief now is that the AGPL just slows down a project like ours. It’s probably perfect for a larger organization with the ability to go out and enforce it, but that is not our case; we have neither the resources nor the will to do so. That said, it’s still the best fit for us, so we’ll stick with it for the time being.

SystemMonitor scripts


We recently added a small but useful new feature to the SystemMonitor: the ability to run external scripts.

Here is an example:

<sysmon logger="Q2">
 <attr name="sleepTime" type="java.lang.Long">3600000</attr>
 <attr name="detailRequired" type="java.lang.Boolean">true</attr>
 <property name="script" value="uname -a" />
 <property name="script" value="id" />
 <property name="script" value="pwd" />
 <property name="script" value="uptime" />
 <property name="script" value="vm_stat" />
 <property name="script" value="jps -l" />
</sysmon>

The output looks like this:


<log realm="org.jpos.q2.qbean.SystemMonitor" at="Fri May 10 10:04:08 UYT 2013.700">
  <info>
               OS: Mac OS X
             host: alejandro-revillas-macbook-pro.local/192.168.1.110
          version: 1.9.1-SNAPSHOT (8a4a517)
         instance: bac8e399-ccf2-4778-97df-d37d704bb011
           uptime: 00:00:08.877
       processors: 8
           drift : 0
    memory(t/u/f): 989/177/812
          threads: 56
            Thread[Reference Handler,10,system]
            Thread[Finalizer,8,system]
            Thread[Signal Dispatcher,9,system]
            Thread[RMI TCP Accept-0,5,system]
            Thread[Keep-Alive-Timer,8,system]
            Thread[Q2-bac8e399-ccf2-4778-97df-d37d704bb011,5,main]
            Thread[DestroyJavaVM,5,main]
            Thread[Timer-0,5,main]
            Thread[pool-1-thread-1,5,main]
            Thread[DefaultQuartzScheduler_Worker-1,5,main]
            Thread[DefaultQuartzScheduler_Worker-2,5,main]
            Thread[DefaultQuartzScheduler_Worker-3,5,main]
            ...
            ...
            ...
            Thread[PooledThread-0,5,ThreadPool-0-1]
            Thread[PooledThread-1,5,ThreadPool-2-3]
    name-registrar:
      txnmgr: org.jpos.transaction.TransactionManager
      logger.: org.jpos.util.Logger
      tspace:default: org.jpos.space.TSpace
        <key count='1'>$TAILLOCK.1878750663</key>
        <keycount>0</keycount>
        <gcinfo>0,0</gcinfo>

      logger.Q2.buffered: org.jpos.util.BufferedLogListener
      server.jcard-server: org.jpos.iso.ISOServer
        connected=0, rx=0, tx=0, last=0
      ssm: org.jpos.security.jceadapter.SSM
      jcard-xml-server: org.jpos.q2.iso.QServer
      tspace:org.jpos.transaction.TransactionManager@6ffb75c7: org.jpos.space.TSpace
        <key count='1'>$HEAD</key>
        <key count='1'>$TAIL</key>
        <keycount>1</keycount>
        <gcinfo>0,0</gcinfo>

      capture-date: org.jpos.ee.CaptureDate
      server.jcard-xml-server: org.jpos.iso.ISOServer
        connected=0, rx=0, tx=0, last=0
      ks: org.jpos.security.SimpleKeyFile
      logger.Q2: org.jpos.util.Logger
      jcard-server: org.jpos.q2.iso.QServer
    uname -a:
      Darwin alejandro-revillas-macbook-pro.local 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64
    id:
      uid=502(apr) gid=20(staff) groups=20(staff),401(com.apple.access_screensharing),12(everyone),33(_appstore),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),100(_lpoperator),204(_developer)
    pwd:
      /Users/apr/git/jcardinternal/build/install/jcardinternal
    uptime:
      10:04  up 8 days, 14:45, 5 users, load averages: 1.26 1.17 1.35
    vm_stat:
      Mach Virtual Memory Statistics: (page size of 4096 bytes)
      Pages free:                           4494.
      Pages active:                       552609.
      Pages inactive:                     271878.
      Pages speculative:                     245.
      Pages wired down:                   216855.
      "Translation faults":            116355062.
      Pages copy-on-write:               5347593.
      Pages zero filled:                52951322.
      Pages reactivated:                 2098799.
      Pageins:                           3732245.
      Pageouts:                           467375.
      Object cache: 73 hits of 543605 lookups (0% hit rate)
    jps -l:
      14371 sun.tools.jps.Jps
      12512 org.gradle.launcher.daemon.bootstrap.GradleDaemon
      14365 jcardinternal-2.0.0-SNAPSHOT.jar
  </info>
</log>

You can keep a handy vmstat 1 30 as a default so that you can get an idea of the overall system performance when you see a high ‘drift’ (drift is the new name for the old ‘elapsed’ info).
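For example, reusing the deployment style shown above, the sysmon configuration could carry it as an additional script property (a sketch, assuming vmstat is available on the host):

```xml
<sysmon logger="Q2">
 <attr name="sleepTime" type="java.lang.Long">3600000</attr>
 <attr name="detailRequired" type="java.lang.Boolean">true</attr>
 <property name="script" value="vmstat 1 30" />
</sysmon>
```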

jPOS JVM options


/by @apr/

jPOS applications have a very low memory footprint, so we are usually fine running a simple

java -server -jar jpos.jar

But for more memory-intensive applications (due to caching), such as jCard, the stop-the-world full GC becomes a problem, freezing the JVM for as much as 5 seconds or more.

If your response time is usually good but from time to time you have an inexplicable spike, I suggest you add:

-Xloggc:log/gc.log

that will create a nice log file that you can tail -f to get some insight into the JVM GC’s whereabouts.

If you see too many full GCs, it’s time to dig deeper. The first thing to check is your -Xmx. If you set a high -Xmx (say 1GB) but don’t set -Xms to the same value, the JVM will try to bring down the memory in use, even if you are not reaching the max value, causing frequent full GCs. You may want to try:

-Xmx1G -Xms1G

and check how the gc.log goes.

If that solves the problem, leave the JVM default options alone; you’re done for now. If it doesn’t, I suggest you change your GC implementation. My choice is the Concurrent Mark Sweep GC implementation, which you can enable using:

-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode

This will probably solve your problem, but while you are at it, I suggest some of these:

jPOS’ Q2 forces a full GC (calling System.gc()) when re-deploying jars in the deploy/lib directory. You could also be using RMI, which initiates a distributed GC by calling System.gc(), or an operator could be fascinated by the GC button in his JMX console. You want to prevent a full GC in those situations, and you can do that using:

-XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses

If you care about processing transactions fast after a restart, you may set:

-XX:+TieredCompilation

and to get some of the latest JVM optimizations:

-XX:+AggressiveOpts

The full list of my jCard JAVA_OPTS p0rn is:

java -server \
    -Dappname=jCard \
    -Dcom.sun.management.jmxremote \
    -Xloggc:log/gc.log \
    -Xmx1G -Xms1G \
    -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses \
    -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode \
    -XX:+UseCMSInitiatingOccupancyOnly \
    -XX:+CMSClassUnloadingEnabled \
    -XX:+CMSScavengeBeforeRemark \
    -XX:+AggressiveOpts \
    -XX:+ParallelRefProcEnabled \
    -XX:+TieredCompilation \
    -jar jcardinternal-2.0.0-SNAPSHOT.jar "$@"

References:

  • Java Performance
  • Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning

PADChannel delay on send


According to Wikipedia, “TCP provides reliable, ordered, error-checked delivery of a stream of octets between programs running on computers connected to an intranet or the public Internet”.

It provides a reliable delivery of a “stream” of octets, not a sequence of “packets”.

That’s the reason most ISO-8583 wire protocol implementations use some kind of message boundary delimiter, typically a header with a length indicator, or markers such as the ETX character.
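The length-header approach can be sketched with plain JDK streams (a simplified illustration, not jPOS’s actual channel code; the key point is that DataInputStream.readFully loops internally until the requested bytes arrive, no matter how the network split the stream into packets):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Sketch of two-byte length-header framing over a TCP stream.
public class LengthHeaderFraming {
    static byte[] frame (byte[] msg) {
        try {
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream (bout);
            out.writeShort (msg.length); // 2-byte network-order length header
            out.write (msg);
            return bout.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException (e);
        }
    }
    static byte[] readFrame (InputStream in) {
        try {
            DataInputStream din = new DataInputStream (in);
            int len = din.readUnsignedShort(); // blocks until both header bytes arrive
            byte[] msg = new byte[len];
            din.readFully (msg);               // blocks until the full body arrives
            return msg;
        } catch (IOException e) {
            throw new UncheckedIOException (e);
        }
    }
    public static void main (String[] args) {
        byte[] wire = frame ("0800 echo".getBytes (StandardCharsets.ISO_8859_1));
        byte[] msg = readFrame (new ByteArrayInputStream (wire));
        System.out.println (new String (msg, StandardCharsets.ISO_8859_1)); // prints "0800 echo"
    }
}
```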

That’s why PADChannel was a tough channel to write, and that’s the reason ISOPackager has a stream-based unpack method and all ISOFieldPackager implementations have stream-based unpack operations, to read the message on the fly.

public void unpack (ISOComponent m, InputStream in);

When you send a message over TCP/IP, you can’t guarantee that the message will be transmitted in a single IP packet and will be available entirely in a single socket ‘read’ operation on the receiving end. You just can’t. Here are some reasons why:

  • The TCP/IP stack on your machine may join a couple of messages together in a single packet if they fit its MTU.
  • Intermediate routers/firewalls may split your messages into smaller packets if their MTU is smaller (we found some very small MTUs in dedicated satellite links and GPRS networks).
  • You could be lucky and your message may fit a single TCP/IP packet all the way, but for some reason the receiving application could be busy or have a very short performance glitch, so by the time it gets to the socket read operation, perhaps there is more than one message waiting. That’s the reason you see the Recv-Q column of the ‘netstat’ command going up and down.
  • There could be some packet loss, so the sender’s TCP/IP stack needs to resend a message and could assemble multiple packets into a single bigger one.

The list goes on; I recommend you read Uyless Black’s TCP/IP books.

But with all that said, we regularly find people imagining that an ISO-8583 message transmitted on one end will arrive in a single socket read at the other end. These are probably legacy implementations migrated from X.25 (where that was true) to TCP/IP without designing any message boundary delimiter strategy.

There’s no real solution to that, but we found this issue impossible to explain to some network admins, so we provided a very limited workaround that only works on highly reliable networks with low traffic and good performance on the receiving end: adding a small ‘delay’ after we send a message, so we can pray that the TCP/IP stack and the intermediate network will hopefully deliver the message in a single packet.

So PADChannel now has a ‘delay’ property (expressed in millis). If you’re facing this problem, I’d use 100ms as a starting value.
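A minimal sketch of how the property could be set in a channel configuration (the host, port, and packager values here are illustrative):

```xml
<channel class="org.jpos.iso.channel.PADChannel"
         packager="org.jpos.iso.packager.ISO87APackager" logger="Q2">
  <property name="host" value="127.0.0.1" />
  <property name="port" value="8000" />
  <!-- millis to sleep after each send -->
  <property name="delay" value="100" />
</channel>
```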

Setting up the Client Simulator


There has been some interest on the jPOS-Users mailing list regarding how to set up the Client Simulator module from jPOS-EE.

I’ll show you here how to use it with the jPOS template.

Step 1: Install a fresh copy of the jPOS template

git clone https://github.com/jpos/jPOS-template.git or download a copy of it.

Step 2: rename the cloned or downloaded directory

Let’s call it ‘clientsimulator’. After renaming it, cd clientsimulator.

Step 3: edit build.gradle

Add a line inside the dependencies block

compile group:'org.jpos.ee', name:'jposee-client-simulator', version:'2.0.1-SNAPSHOT'

Step 4: call gradle installResources

Please note when I say gradle, you can either use your locally installed gradle, or the gradlew wrapper available in the jPOS template that you just downloaded.

This will copy some sample client simulator configuration from the client-simulator.jar to your local src/dist directory.

After running that, you’ll see a few new files in src/dist/deploy and src/dist/cfg, e.g.:

  • 10_clientsimulator_channel.xml
  • 20_clientsimulator_mux.xml
  • 25_clientsimulator_ui.xml # remove it if you’re running headless
  • 30_clientsimulator.xml
  • echo_s and echo_r in the src/dist/cfg directory

Step 5: call gradle run

As an alternative, you can navigate to your build/install/clientsimulator directory and call bin/q2 (or bin\q2.bat if you’re on Windows).

As a next step, you can edit your src/dist/deploy/10_clientsimulator_channel.xml file and change it to use your selected packager.

Preliminary OSGi support


As of f19a445d we added support for OSGi.

The jar task produces the jPOS OSGi bundle (in the build/libs directory).

The bundleFull task creates a bigger bundle that includes the jPOS dependencies in its lib directory, under the name jpos-1.9.3-SNAPSHOT-bundle-full.jar (precompiled versions of jpos-1.9.3-SNAPSHOT.jar and jpos-1.9.3-SNAPSHOT-bundle-full.jar can be downloaded from the jPOS Bundles repository).

There’s a new qnode module that you can use to test the jPOS bundle (if you don’t have an OSGi container already installed). Go to the qnode directory (a sibling of the jpos one), run gradle installApp, and then copy your bundle to the bundle directory.

Here is a full session:

cd jpos
gradle bundleFull

cd ../qnode
gradle installApp

cp ../jpos/build/libs/jpos-1.9.3-SNAPSHOT-bundle-full.jar build/install/qnode/bundle
build/install/qnode/bin/qnode

This should launch Q2 installed as a Bundle in an OSGi container (in this case, Apache Felix).

@apr

Eating our own dogfood


I really like the TransactionManager; it allows me to clearly define a transaction as if it were processed in an assembly line, with each small reusable participant doing its own little part, and the TransactionManager giving useful profiling information.

But we now live in a RESTful world, and we need to implement REST-based services here and there.

A typical REST call implementation looks like this:

@Path("/customers/{customer_id}/wallets/{wallet_id}/credit")
public class WalletCredit extends WalletCreditDebitSupport {
    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response credit (
            @Context UriInfo uriInfo,
            @PathParam("customer_id") String customerId,
            @PathParam("wallet_id") String walletId,
            @FormParam("rrn") String rrn,
            @FormParam("detail") String detail,
            @FormParam("amount") BigDecimal amount,
            @FormParam("currency") String currency)
            throws IOException, URISyntaxException, BLException, ISOException
    {
        …
        …
        …

    }
}

so the first reaction is to just start coding the business logic as part of the body of that method. You need to:

  • sanity check the parameters
  • open a database connection
  • create a TranLog record
  • Validate the Customer, Account, etc.
  • create a GL Transaction
  • do some audit logging
  • commit the JDBC transaction

This of course works fine, but hey, we already have code that does that:

  • We have a standard transaction Open participant
  • We usually have a CreateTranLog participant
  • We have CheckCustomer and CheckAccount participants
  • We have participants that generate GLTransactions
  • And of course, the Close and Debug participants

The problem with a traditional implementation inside that body is that we need to reinvent the wheel, repeat code, and add custom profiling; otherwise we wouldn’t know which operations are fast and which ones are slow.

The solution was to eat our own dog food and use the TransactionManager.

So in this jCard case I’ve created a rest_txnmgr.xml that looks like this:

<!DOCTYPE txnmgr [
    <!ENTITY PAN_PATTERN   "^[\d]{10}$">
    <!ENTITY AMOUNT_PATTERN  "[+-]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?">
    <!ENTITY CURRENCY_PATTERN "^\d{1,4}">
    <!ENTITY RRN_PATTERN "^\d{1,12}">
    <!ENTITY DETAIL_PATTERN "^.{0,255}$">
    <!ENTITY WALLET_PATTERN "^[\d]{1,64}$">
    <!ENTITY TEXT50_PATTERN "^[\w\s.\-\']{0,50}$">
    ...
    ...
]>

<txnmgr name='rest-txnmgr' class="org.jpos.transaction.TransactionManager" logger="Q2" realm='rest-txnmgr'>
  <property name="queue" value="JCARD.RESTAPI.TXN" />
  <property name="sessions" value="2" />
  <property name="max-sessions" value="64" />
  <property name="debug" value="true" />

  <participant class="org.jpos.jcard.PrepareContext" logger="Q2" realm="prepare-context" />
  <participant class="org.jpos.jcard.Switch" logger="Q2" realm="Switch">
    <property name="WalletCreate" value="walletcreate close trigger-response" />
    <property name="WalletCredit" value="walletcredit close trigger-response" />
    <property name="WalletDebit" value="walletdebit close trigger-response" />
    <property name="BalanceInquiry" value="balance-inquiry close trigger-response" />
    <property name="MiniStatement" value="mini-statement close trigger-response" />
    …
    ..

</participant>
…
…
     <group name="walletcreate">
   <participant class="org.jpos.jcard.rest.ValidateParams" logger="Q2">
     <mandatory>
       <param name="PAN">&PAN_PATTERN;</param>
       <param name="WALLET_NUMBER">&WALLET_PATTERN;</param>
     </mandatory>
     <optional>
       <!-- no optional fields -->
     </optional>
   </participant>
   <participant class="org.jpos.transaction.Open" logger="Q2" realm="open">
     <property name="checkpoint" value="open" />
     <property name="timeout" value="300" />
   </participant>
   <participant class="org.jpos.jcard.CreateTranLog" logger="Q2"
                realm="create-tranlog">
     <property name="capture-date" value="capture-date" />
     <property name="node"         value="01" />
   </participant>
   <participant class="org.jpos.jcard.CheckCustomer"
                logger="Q2" realm="checkcustomer">
   </participant>
   <participant class="org.jpos.jcard.rest.WalletCreateParticipant" logger="Q2" realm="create-wallet">
     <property name="chart"        value="jcard" />
     <property name="customers-account" value="21" />
     <property name="kid" value="&KID;" />
   </participant>
 </group>
 ...
 ...

</txnmgr>

Now the code looks like this:

@Path("/customers/{customer_id}/wallets/{wallet_id}/credit")
public class WalletCredit extends WalletCreditDebitSupport {
    @POST
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response credit (
            @Context UriInfo uriInfo,
            @PathParam("customer_id") String customerId,
            @PathParam("wallet_id") String walletId,
            @FormParam("rrn") String rrn,
            @FormParam("detail") String detail,
            @FormParam("amount") BigDecimal amount,
            @FormParam("currency") String currency)
            throws IOException, URISyntaxException, BLException, ISOException
    {
        return process(uriInfo, customerId, walletId, rrn, detail, amount, currency);
    }
}
public abstract class WalletCreditDebitSupport extends RestSupport {
    @SuppressWarnings("unchecked")
    public Response process (
            UriInfo uriInfo,
            String customerId,
            String walletId,
            String rrn,
            String detail,
            BigDecimal amount,
            String currency)
            throws IOException, URISyntaxException, BLException, ISOException
    {
        org.jpos.transaction.Context ctx = new org.jpos.transaction.Context();
        ctx.put(TXNNAME, getClass().getSimpleName());
        ctx.put(PAN, customerId);
        ctx.put(WALLET_NUMBER, walletId);
        ctx.put(CURRENCY, currency);
        ctx.put(RRN, rrn);
        ctx.put(AMOUNT, amount);
        ctx.put(DETAIL, detail);

        int result = queryTxnMgr(ctx);
        Map<String,Object> resp = createResponseMap();
        if (result == TransactionManager.PREPARED) {
            String irc = (String) ctx.get (IRC);
            resp.put("success", TRAN_APPROVED.equals(irc));
            resp.put("balance", ctx.get (LEDGER_BALANCE));
            return Response.ok(toJson(resp), MediaType.APPLICATION_JSON)
                .status(Response.Status.CREATED)
                .location(URI.create(uriInfo.getAbsolutePath().toString())).build();
        } else {
            resp.put("success", false);
            resp.put("message", ctx.getString(TxnConstants.RC));
            return Response.status(Response.Status.BAD_REQUEST).entity(toJson(resp)).build();
        }
    }
}

And the best part of this, is that we get our familiar ‘Debug’ and ‘Trace’ events:

<log realm="rest-txnmgr" at="Wed Oct 09 20:07:01 UYST 2013.489" lifespan="925ms">
<debug>
    rest-txnmgr-0:idle:1
            prepare: org.jpos.jcard.PrepareContext NO_JOIN
            prepare: org.jpos.jcard.Switch READONLY NO_JOIN
        selector: walletcreate close trigger-response
            prepare: org.jpos.jcard.rest.ValidateParams READONLY NO_JOIN
            prepare: org.jpos.transaction.Open READONLY NO_JOIN
            prepare: org.jpos.jcard.CreateTranLog NO_JOIN
            prepare: org.jpos.jcard.CheckCustomer NO_JOIN
            prepare: org.jpos.jcard.rest.WalletCreateParticipant READONLY NO_JOIN
            prepare: org.jpos.transaction.Close READONLY
            prepare: org.jpos.jcard.rest.TriggerResponse READONLY
            prepare: org.jpos.transaction.Debug READONLY
            commit: org.jpos.transaction.Close
            commit: org.jpos.jcard.rest.TriggerResponse
            commit: org.jpos.transaction.Debug
    head=2, tail=2, outstanding=0, active-sessions=2/64, tps=0, peak=0, avg=0.00, elapsed=925ms
    <profiler>
    prepare: org.jpos.jcard.PrepareContext [0.1/0.1]
    prepare: org.jpos.jcard.Switch [0.0/0.2]
    prepare: org.jpos.jcard.rest.ValidateParams [0.2/0.5]
    prepare: org.jpos.transaction.Open [0.7/1.2]
    prepare: org.jpos.jcard.CreateTranLog [2.3/3.6]
    prepare: org.jpos.jcard.CheckCustomer [8.5/12.2]
    prepare: org.jpos.jcard.rest.WalletCreateParticipant [556.5/568.7]
    prepare: org.jpos.transaction.Close [0.1/568.9]
    prepare: org.jpos.jcard.rest.TriggerResponse [0.1/569.1]
    prepare: org.jpos.transaction.Debug [0.1/569.2]
    commit: org.jpos.transaction.Close [344.9/914.2]
    commit: org.jpos.jcard.rest.TriggerResponse [0.6/914.8]
    commit: org.jpos.transaction.Debug [9.8/924.7]
    end [0.9/925.7]
    </profiler>
</debug>
</log>
<log realm="debug" at="Wed Oct 09 20:07:03 UYST 2013.845">
<commit>
    <id>2</id>
    <context>
    <entry key='PAN'>0000000005</entry>
    <entry key='RRN'>000000000001</entry>
    <entry key='LOGEVT'><log realm="" at="Wed Oct 09 20:07:03 UYST 2013.846" lifespan="38ms">
<log>

    <![CDATA[
<transaction id="97" date="20131009200703" post-date="20131009" journal="jcard">
<detail>WalletCredit 2</detail>
<entry account="11.001.00" type="debit" layer="840">
    <amount>100.00</amount>
</entry>
<entry account="21.0000000005.1" type="credit" layer="840">
    <amount>100.00</amount>
</entry>
</transaction>
    ]]>
</log>
</log>
</entry>
    <entry key='WALLET_NUMBER'>0000000005</entry>
    <entry key='CARDHOLDER'>org.jpos.ee.CardHolder@64a5efa3[id=5]</entry>
    <entry key='DB'>org.jpos.ee.DB@63130f0d</entry>
    <entry key='LEDGER_BALANCE'>100.00</entry>
    <entry key='TRANLOG'>org.jpos.ee.TranLog@76250246[id=172]</entry>
    <entry key='AMOUNT'>100.00</entry>
    <entry key='SWITCH'>WalletCredit (walletcredit close trigger-response)</entry>
    <entry key='ACCOUNT'>org.jpos.gl.FinalAccount@20c0fe7c[id=49,code=21.0000000005.1]</entry>
    <entry key='TXNRESULT'>1</entry>
    <entry key='TXNMGR'>rest-txnmgr</entry>
    <entry key='IRC'>0000</entry>
    <entry key='CURRENCY'>840</entry>
    <entry key='DETAIL'>Test wallet credit</entry>
    <entry key='WALLET'>org.jpos.ee.Wallet@71df1c8a[id=1,number=0000000005]</entry>
    <entry key='CAPTURE_DATE'>Wed Oct 09 00:00:00 UYST 2013</entry>
    <entry key='TXNNAME'>WalletCredit</entry>
    <entry key='TIMESTAMP'>Wed Oct 09 20:07:03 UYST 2013</entry>
    <entry key='PROFILER'>
    <profiler>
        prepare-context [0.0/0.0]
        open [1.1/1.2]
        close [533.2/534.4]
        end [10.9/545.4]
    </profiler>
    </entry>
    <entry key='ISSUER'>org.jpos.ee.Issuer@6f59db4f[id=1,name=jcard]</entry>
    </context>
</commit>
</log>
<log realm="rest-txnmgr" at="Wed Oct 09 20:07:03 UYST 2013.864" lifespan="553ms">
<debug>
    rest-txnmgr-0:idle:2
            prepare: org.jpos.jcard.PrepareContext NO_JOIN
            prepare: org.jpos.jcard.Switch READONLY NO_JOIN
        selector: walletcredit close trigger-response
            prepare: org.jpos.jcard.rest.ValidateParams READONLY NO_JOIN
            prepare: org.jpos.transaction.Open READONLY NO_JOIN
            prepare: org.jpos.jcard.CreateTranLog NO_JOIN
            prepare: org.jpos.jcard.CheckCustomer NO_JOIN
            prepare: org.jpos.jcard.CheckWallet READONLY NO_JOIN
            prepare: org.jpos.jcard.rest.WalletTransactionParticipant READONLY NO_JOIN
            prepare: org.jpos.transaction.Close READONLY
            prepare: org.jpos.jcard.rest.TriggerResponse READONLY
            prepare: org.jpos.transaction.Debug READONLY
            commit: org.jpos.transaction.Close
            commit: org.jpos.jcard.rest.TriggerResponse
            commit: org.jpos.transaction.Debug
    head=3, tail=3, outstanding=0, active-sessions=2/64, tps=0, peak=1, avg=0.50, elapsed=553ms
    <profiler>
    prepare: org.jpos.jcard.PrepareContext [0.1/0.1]
    prepare: org.jpos.jcard.Switch [0.1/0.2]
    prepare: org.jpos.jcard.rest.ValidateParams [0.2/0.5]
    prepare: org.jpos.transaction.Open [0.8/1.3]
    prepare: org.jpos.jcard.CreateTranLog [2.4/3.8]
    prepare: org.jpos.jcard.CheckCustomer [8.7/12.5]
    prepare: org.jpos.jcard.CheckWallet [7.3/19.9]
    prepare: org.jpos.jcard.rest.WalletTransactionParticipant [487.2/507.2]
    prepare: org.jpos.transaction.Close [0.1/507.3]
    prepare: org.jpos.jcard.rest.TriggerResponse [0.0/507.4]
    prepare: org.jpos.transaction.Debug [0.0/507.5]
    commit: org.jpos.transaction.Close [27.0/534.5]
    commit: org.jpos.jcard.rest.TriggerResponse [0.1/534.7]
    commit: org.jpos.transaction.Debug [18.3/553.0]
    end [1.0/554.1]
    </profiler>
  </debug>
</log>

Hope you consider using the TransactionManager the next time you need to write a REST API call.


jPOS Programmer’s Guide


For those not regularly checking the [jPOS main site], please note there’s a new Learn tab on the main site with a direct link to download the new, free jPOS Programmer’s Guide draft.

While it’s still work in progress, it provides useful information related to the jPOS 1.9.x series that complements the standard for-sale guide.

Feedback is of course very welcome!


What API designers could learn from the payments industry


We all know the Fallacies of Distributed Computing:

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

I think there’s a 9th: REST API designers know the meaning of the word Fallacy.

So to clarify and as a public service, we need to start talking about distributed computing facts instead:

  1. The network IS NOT reliable.
  2. Latency IS NOT zero.
  3. Bandwidth is limited.
  4. The network is not secure (you should know that already)
  5. Topology does change, and at the worst possible time.
  6. The administrator is dead.
  7. Moving bits around the net has a cost.
  8. The network IS heterogeneous.

Interestingly enough, a very old protocol, ISO-8583, designed in the 80s to support slow 300 and 1200 bps dialup links, is extremely aware of these facts and works around these problems with a very simple and asynchronous message flow.

Take a look at any popular REST payments API: you usually call a method to authorize a transaction, passing parameters like these:

  • Card and Expiration (or a token)
  • Amount, perhaps currency
  • A description

Lovely, simple, and wrong!

Using that popular design, you POST the transaction and pray to get a response from the server. If you get a response (a 200, a 201, or even a 500), everything is alright; but if the request times out, or you go down, or the server goes down, or the ISP is reconfiguring a router, you can’t really tell what happened. If you’re lucky and the server didn’t receive your request, that’s fine, you can retry it; but if the server did receive your request and authorized it against its upper-level acquirer, then you’ll have an angry cardholder with a hold on their account, and perhaps even a debit (because I don’t see many payment gateways accepting reconciliation messages or settlement files).

In ISO-8583, when you send an authorization request or a financial request and you don’t get a response, you queue a reversal in a persistent store and forward (SAF) queue. So the next time you contact the server, before sending a new transaction, you send that reversal. If you receive a response from the server, but the response comes late and you have already timed out, you also send a ‘late-response’ reversal to the server.
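The store-and-forward behavior described above can be sketched as follows. This is a minimal in-memory model; `SafQueue` and its method names are made up for illustration, and a real SAF implementation would persist entries to disk or a database so they survive a restart:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal in-memory sketch of a store-and-forward (SAF) queue.
// On timeout (or a late response), the client queues a reversal;
// before sending any new transaction after reconnecting, pending
// reversals are drained and sent first.
public class SafQueue {
    private final Deque<String> pending = new ArrayDeque<>();

    // Called when a request times out or a response arrives too late:
    // remember that the transaction must be reversed.
    public synchronized void queueReversal(String rrn) {
        pending.add("REVERSAL:" + rrn);
    }

    // Called right after (re)connecting, before sending new traffic.
    public synchronized Deque<String> drain() {
        Deque<String> out = new ArrayDeque<>(pending);
        pending.clear();
        return out;
    }

    public synchronized boolean isEmpty() {
        return pending.isEmpty();
    }
}
```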

In the same way, when you post a transaction that already took place (i.e. adjusting a previously approved transaction for a different amount, something that happens all the time at restaurants that support tips, or gas pump transactions where you approve for $100 but then complete for $20), and you don’t get a response, you send a retransmission of the same transaction, as many times as necessary in order to deal with transient network problems.

In order to support reversals and retransmissions, you need a unique transaction identifier. Different networks use different transaction identifiers: the Terminal ID plus a Systems Trace Audit Number (STAN), a Retrieval Reference Number, or, in ISO-8583 v2003, the new Transaction life cycle identification data. The client generates a unique identifier so that when you send a follow-up transaction, you can give the server a reference to the original one.

I believe all payment APIs out there (including those super very cool ones) should consider adding three things:

A new parameter, RRN

Client code could generate a UUID and use it as the RRN (Retrieval Reference Number)

Support reversals (DELETE)

Could be as simple as adding a new verb, DELETE, where the only parameter, along with authentication data, is the RRN

Support for retransmissions (PUT)

If your initial transaction was a POST, I propose that you also accept a PUT, with the same parameters. On the server side the difference between the POST and the PUT operations would be just an extra step to make sure that you didn’t process the original POST, and return the original auth code if that was the case.
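Under those rules, the server side could be sketched like this. It’s a toy in-memory model; `PaymentStore` and its method names are made up for illustration, and a real gateway would persist state and forward to its acquirer:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the proposed semantics: POST authorizes, PUT retransmits
// (idempotent on the RRN), DELETE reverses. The RRN is client-generated.
public class PaymentStore {
    private final Map<String, String> authorized = new ConcurrentHashMap<>();

    // POST: authorize and remember the auth code under the client's RRN.
    public String post(String rrn, String amount) {
        String authCode = UUID.randomUUID().toString().substring(0, 6);
        authorized.put(rrn, authCode);
        return authCode;
    }

    // PUT: retransmission -- if we already processed this RRN,
    // return the original auth code instead of authorizing twice.
    public String put(String rrn, String amount) {
        String existing = authorized.get(rrn);
        return existing != null ? existing : post(rrn, amount);
    }

    // DELETE: reversal keyed by RRN; a no-op if we never saw the RRN.
    public boolean delete(String rrn) {
        return authorized.remove(rrn) != null;
    }
}
```

The key property is that PUT and DELETE can be safely retried as many times as needed, which is exactly what a client behind an unreliable network requires.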

Of course, if you’re designing a new super cool minimalistic REST API you probably don’t listen to people with grey hair, but just in case, my 2c :)

@apr

jPOS 1.9.4 released


jPOS 1.9.4 has been released and it includes the following changes, most notably OSGi support.

  • Added len,description constructor to IF_NOP
  • Added IFB_LLHEX (can be used to deal with encrypted track2s)
  • Added HexNibblesPrefixer (required by IFB_LLHEX)
  • Added OSGi support
  • Added ‘qnode’ (OSGi testbed)
  • DirPoll now supports file compression
  • Profiler can be reenabled
  • TransactionManager PAUSED transactions reuse profiler to provide combined elapsed times
  • Added org.jpos.iso.GenericSSLSocketFactory
  • jPOS-105 QServer should unregister as a space listener
  • jPOS-106 ChannelAdaptor reconnect flag uses non serializable Object
  • jPOS-108 FSDMsg consuming input stream
  • DirPoll.scan is now protected
  • MUX interface now extends ISOSource
  • QMUX.send now checks isConnected()
  • DirPoll now accepts a priority.regex boolean property (73c2f84)
  • jPOS-110 QMUX major start-up issue (1526dab)
  • DirPoll Retry when archive.timestamp is set to true (pull/33)
  • Generate optional app specific version info 02f739a

See full ChangeLog.

1.9.4 is available in Maven Central.

DUKPT support in 1.9.5-SNAPSHOT


Now that 1.9.4 has been released, we moved the development version to 1.9.5-SNAPSHOT (nightly builds available in jPOS Maven Repository).

We’ve added DUKPT support to JCESecurityModule. There’s no documentation yet, but for an easy-to-read example, you can take a look at the DUKPTTest class.

Pull configurations


You might have heard it a thousand times: push is good, IoC is good, pull is bad; and I have to agree.

jPOS components get their configurations pushed by the Q2 container when they implement the Configurable interface.

But if you’re used to jPOS configurations, which can be filtered at build time by the Gradle build based on the desired target profile, or decorated by means of @vsalaman‘s contributed decorator, you may find yourself reinventing the wheel, figuring out how to get some Configuration object into a non-jPOS component (such as a servlet or any other non-jPOSsy code).

To solve that in a standard way, we’ve created QConfig. QConfig is a minimalistic QBean that just registers its own Configuration object in the NameRegistrar (with a “config.” prefix). So for example, you can deploy something like this:

<config>
    <property name="test" value="ABC" />
    <property name="test1" value="123" />
    <property file="cfg/myprops.cfg" />
</config>

The word config has been registered in QFactory.properties so the <config> element above is equivalent to:

<config name='config' class='org.jpos.q2.qbean.QConfig'>
    ...
    ...
</config>

So non-jPOS code running inside Q2 can get a reference to the ‘config’ configuration by calling:

Configuration cfg = QConfig.getConfiguration("config");

While we were at it, we added the ability to merge configuration objects in other QBeans. There are many ways to achieve the same thing without this technique: for example, you can use <property file="xxx" /> in different QBeans to pull the same config, or you can use XML entities; but, because we can, we just offer this additional, quite simple way to do it.

Any QBean descriptor now accepts an optional attribute called merge-configuration that accepts a list of QConfig configurations and merges them on-the-fly at QBean configuration time. Here is a simple example:

deploy/00_config.xml

<config>
    <property name="test" value="ABC" />
    <property name="test1" value="123" />
</config>

deploy/01_config.xml

<config name='config1'>
    <property name="test2" value="XYZ" />
</config>

deploy/90_script.xml

<script merge-configuration='config, config1'>
    print ("TEST: " + cfg.get("test"));
    print ("TEST2: " + cfg.get("test2"));
</script>

Because this merge-configuration handling is honored by QFactory, used by other components such as the TransactionManager to instantiate its participants, you can use it in TM participants as well (i.e. to pull reused configuration, such as result codes and the like).
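For instance, a TransactionManager participant descriptor could pull the shared ‘config’ QBean defined above like this (the participant class name is just a placeholder):

```xml
<participant class="com.example.MyParticipant" logger="Q2"
             merge-configuration="config">
    <property name="extra" value="per-participant value" />
</participant>
```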

QMUX internal space


The jPOS QMUX service uses the Space (usually the default global space) in order to communicate with other components such as the ChannelAdaptor or QServer using its in and out queues. But in addition, it implements the MUX interface by storing selected parts of a request message (known as the QMUX key), as shown in the picture below:

QMUX Space Dance

In high-traffic systems with many QMUXes, every thread waiting for a response would wake up, albeit for a tiny little while, when something happens in the space. This small patch, done in 1.9.7, keeps using the global space for the QMUX in and out queues, but uses an internal Space (currently a TSpace) to perform the key-matching dance.

The change should be transparent for most users, but we’ve seen some implementations out there that dangerously peek and poke our entries in the Space; this patch, in addition to improving performance, intends to discourage such use in the future (by not exposing the internal space to other components). Anyway, for backward compatibility, we honor a new property, reuse-space, that, if set to true, reverts to the old implementation using the global space.
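If you do depend on the old behavior, the property can be set in the QMUX deployment descriptor; a minimal sketch (the name and queue names are illustrative):

```xml
<qmux name="mymux">
    <in>mymux-receive</in>
    <out>mymux-send</out>
    <property name="reuse-space" value="true" />
</qmux>
```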

Context trace


In jPOS 1.9.7 cce6a27 we’ve added a new transient trace flag to the Context that can be very useful during development.

Those of you using the TransactionManager with a large number of participants know that sometimes it becomes difficult to know who placed what in the Context.

You get to see a Context with many entries (REQUEST, RESPONSE, IRC, SOURCE, TRANLOG, TIMESTAMP, AMOUNT, PAN, ADDITIONAL_AMOUNT, etc.) but pinpointing where a given value was placed gets difficult.

If the Context’s new trace boolean is set to true (something you can do via a configuration property in one of the initial participants, such as PrepareContext, or even closer to the incoming message, in the ISORequestListener where you create the Context), the Debug output looks like this:

   <profiler>
        REQUEST='<-- 2100 000000000162 29110001        ' [org.jpos.jcard.IncomingSupport.process(IncomingSupport.java:52)] [0.1/0.1]
        SS='JCARD' [org.jpos.jcard.IncomingSupport.process(IncomingSupport.java:53)] [0.0/0.2]
        TXNNAME='100.00' [org.jpos.jcard.IncomingSupport.process(IncomingSupport.java:68)] [0.0/0.2]
        SOURCE='org.jpos.iso.channel.CSChannel@2c42dc17' [org.jpos.jcard.IncomingSupport.process(IncomingSupport.java:69)] [0.0/0.2]
        WATCHDOG='org.jpos.jcard.IncomingSupport$1@1aad5bb2' [org.jpos.jcard.IncomingSupport.process(IncomingSupport.java:76)] [0.0/0.3]
     prepare-context [4.6/4.9]
        TIMESTAMP='Tue Apr 08 12:29:20 UYT 2014' [org.jpos.jcard.PrepareContext.prepare(PrepareContext.java:33)] [0.0/5.0]
        TXNMGR='txnmgr' [org.jpos.jcard.PrepareContext.prepare(PrepareContext.java:38)] [0.0/5.0]
        DB='org.jpos.ee.DB@3f3fd620' [org.jpos.transaction.TxnSupport.getDB(TxnSupport.java:157)] [0.1/5.1]
        TX='org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction@4d0ee1de' [org.jpos.transaction.Open.prepare(Open.java:38)] [37.4/42.5]
     open [0.0/42.5]
        SWITCH='100.00 (authorization prepareresponse logit close sendresponse)' [org.jpos.jcard.Switch.select(Switch.java:39)] [0.0/42.6]
        PCODE='000000' [org.jpos.jcard.CheckFields.putPCode(CheckFields.java:163)] [0.0/42.7]
        PCODE_TXN_TYPE='00' [org.jpos.jcard.CheckFields.putPCode(CheckFields.java:164)] [2.6/45.3]
        PCODE_ACCOUNT_TYPE='00' [org.jpos.jcard.CheckFields.putPCode(CheckFields.java:165)] [0.0/45.4]
        PCODE_ACCOUNT2_TYPE='00' [org.jpos.jcard.CheckFields.putPCode(CheckFields.java:166)] [0.0/45.4]
        TRANSMISSION_TIMESTAMP='Tue Apr 08 12:29:20 UYT 2014' [org.jpos.jcard.CheckFields.putTransmissionTimestamp(CheckFields.java:301)] [0.0/45.5]
        LOCAL_TRANSACTION_TIMESTAMP='Tue Apr 08 12:29:20 UYT 2014' [org.jpos.jcard.CheckFields.putLocalTransactionTimestamp(CheckFields.java:297)] [0.0/45.5]
        AMOUNT='100.01' [org.jpos.jcard.CheckFields.putAmount(CheckFields.java:231)] [0.0/45.6]
        CURRENCY='840' [org.jpos.jcard.CheckFields.putAmount(CheckFields.java:232)] [0.0/45.6]
        PAN='6009330000000033' [org.jpos.jcard.CheckFields.putPAN(CheckFields.java:180)] [0.0/45.7]
        EXP='4912' [org.jpos.jcard.CheckFields.putPAN(CheckFields.java:181)] [0.0/45.7]
        TID='29110001        ' [org.jpos.jcard.CheckFields.assertFields(CheckFields.java:127)] [0.0/45.7]
        NETWORK_CAPTURE_DATE='Tue Apr 08 12:00:00 UYT 2014' [org.jpos.jcard.CheckFields.putCaptureDate(CheckFields.java:275)] [0.0/45.8]
        MID='001001' [org.jpos.jcard.CheckFields.assertFields(CheckFields.java:130)] [0.0/45.8]
        TRANLOG='org.jpos.ee.TranLog@7dcadb39[id=166]' [org.jpos.jcard.CreateTranLog.doPrepare(CreateTranLog.java:99)] [2.6/48.5]
        CAPTURE_DATE='Tue Apr 08 00:00:00 UYT 2014' [org.jpos.jcard.CreateTranLog.doPrepare(CreateTranLog.java:100)] [0.0/48.5]
     create-tranlog [0.0/48.6]
        CARD='org.jpos.ee.Card@42c613bd[id=5,pan=600933...0033]' [org.jpos.jcard.CheckCard.prepare(CheckCard.java:65)] [10.0/58.6]
        ISSUER='org.jpos.ee.Issuer@61188c80[id=1,name=1]' [org.jpos.jcard.CheckCard.prepare(CheckCard.java:97)] [2.8/61.5]
        CARDPRODUCT='org.jpos.ee.CardProduct@60ea1534[id=3,name=3]' [org.jpos.jcard.CheckCard.prepare(CheckCard.java:98)] [0.0/61.5]
     check-card [0.0/61.5]
     check-terminal [6.3/67.9]
        ACQUIRER='org.jpos.ee.Acquirer@2b001c59[id=1,name=1]' [org.jpos.jcard.CheckAcquirer.prepare(CheckAcquirer.java:51)] [6.6/74.5]
     check-acquirer [0.0/74.5]
        ACCOUNT='org.jpos.gl.FinalAccount@7f7a1bec[id=28,code=22.0000000002]' [org.jpos.jcard.SelectAccount.prepare(SelectAccount.java:49)] [1.0/75.6]
     select-account [0.0/75.6]
     check-previous-reverse [3.2/79.1]
     check-velocity [18.2/97.3]
     authorization-start [0.0/97.4]
        GLSESSION='org.jpos.gl.GLSession@5976dbd8[DB=org.jpos.ee.DB@3f3fd620]' [org.jpos.jcard.JCardTxnSupport.getGLSession(JCardTxnSupport.java:146)] [1.7/99.2]
     authorization-pre-lock-journal [0.0/99.2]
     authorization-post-lock-journal [1.7/101.0]
     authorization-compute-balance [7.0/108.0]
        ACCOUNT='org.jpos.gl.FinalAccount@7f7a1bec[id=28,code=22.0000000002]' [org.jpos.jcard.Authorization.prepare(Authorization.java:110)] [0.1/108.2]
     authorization-get-credit-line [8.1/116.3]
        RC='not.sufficient.funds' [org.jpos.jcard.Authorization.prepare(Authorization.java:195)] [0.8/117.1]
        EXTRC='Credit line is 0.00, issuer fee=6.75' [org.jpos.jcard.Authorization.prepare(Authorization.java:197)] [0.0/117.1]
     authorization [0.0/117.2]
     create-cache-ledger [6.3/123.5]
     create-cache-pending-and-credit [8.4/132.0]
     create-cache-pending [47.5/179.5]
        LEDGER_BALANCE='100.00' [org.jpos.jcard.ComputeBalances.prepare(ComputeBalances.java:84)] [0.1/179.6]
        AVAILABLE_BALANCE='100.00' [org.jpos.jcard.ComputeBalances.prepare(ComputeBalances.java:85)] [0.0/179.7]
     compute-balances [0.0/179.7]
        IRC='1016' [org.jpos.jcard.PrepareResponse.setRetCode(PrepareResponse.java:142)] [2.9/182.6]
        RESPONSE='<-- 2110 000000000162 29110001        ' [org.jpos.jcard.PrepareResponse.prepareForAbort(PrepareResponse.java:56)] [19.9/202.6]
     close [9.5/212.1]
        REQUEST='<-- 2100 000000000162 29110001        ' [org.jpos.jcard.ProtectDebugInfo.protect(ProtectDebugInfo.java:43)] [647.9/860.1]
     end [0.2/860.3]
   </profiler>

Although it may look verbose, this can be very useful while coding: it helps you spot problems and assists with debugging.

I just found one issue in the jCard system while writing this blog post. Look at this: we set the ACCOUNT in SelectAccount

        ACCOUNT='org.jpos.gl.FinalAccount@7f7a1bec[id=28,code=22.0000000002]' [org.jpos.jcard.SelectAccount.prepare(SelectAccount.java:49)] [1.0/75.6]

then we set it again in Authorization.

        ACCOUNT='org.jpos.gl.FinalAccount@7f7a1bec[id=28,code=22.0000000002]' [org.jpos.jcard.Authorization.prepare(Authorization.java:110)] [0.1/108.2]

Not a big deal, it’s the same account, but worth checking why we are doing that.


How an audit can make you less secure


First a disclaimer: I know excellent auditors, starting with my friend Dave from the Payments Systems Blog, but I also know really clueless ones. Here is a little story of a system I built some 8 or 10 years ago that would have been resilient to the Heartbleed bug; but of course, the auditor couldn’t understand it, and it had the word MD5 in it, which makes them cry like tiny little girls, so we had to “improve” it to make it less secure.

HeartBleed

I was into amateur packet radio and BBS systems in the 80s, where monitoring the air, or a serial line, was easy, so things like one-time passwords and two-way authentication have always been within my area of interest. When it came time to provide an internal user interface for jPOS and I had to design a login form, I wanted to protect the users’ passwords against an operator with access to the system; I wanted people to be able to use a password like TheBossIsAnIdiot if they wanted, making it difficult for the programmer/operator on the server side to see it.

So the solution was easy. The server would generate a nonce and send it to the client, the client would use that nonce, some other data (like the session id) and the password, and send an MD5 of all that to the server.

I wasn’t and I’m not a JS expert, and we didn’t have things like jQuery or Angular those days, but I wrote this little piece of code that implemented the login form:

Login Form

function doHash(frm) {
    var username = frm.username.value;
    var password = frm.password.value;

    if (username.length < 3 || password.length < 3) {
        alert ("Invalid Username and/or Password.");
        return false;
    }
    var hash     = frm.hash.value;                    // server-generated nonce
    var seed     = readCookie ("JSESSIONID") + hash;  // session id + nonce
    var pass     = hex_md5 (username + password);     // what the server stores

    frm.password.readOnly = true;
    frm.password.value = hex_md5 (seed + pass);       // only this goes on the wire
}

The server would do the same computation in order to verify the login.
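That server-side check amounts to recomputing the same two MD5s and comparing. A self-contained sketch using the JDK’s MessageDigest (class and method names are my own; the server is assumed to store `md5(username + password)` rather than the clear password):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Recomputes the client-side hash, md5(sessionId + nonce + md5(user + pass)),
// and compares it with what the login form submitted.
public class LoginVerifier {
    public static String md5Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d)
                sb.append(String.format("%02x", b));  // hex-encode each byte
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // storedPassHash is md5(username + password), kept on the server.
    public static boolean verify(String sessionId, String nonce,
                                 String storedPassHash, String submitted) {
        String expected = md5Hex(sessionId + nonce + storedPassHash);
        return expected.equals(submitted);
    }
}
```

Because the nonce changes on every login, a captured hash is useless for replay, and the clear password never reaches the server.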

I wasn’t comfortable with the solution, because somehow the initial password was either entered by the operator, or sent via email, so I forced a password change in the first login, and the password change would just send an XOR of the existing password hash so that the server could apply the same XOR and upgrade the password to the latest version.

But here comes the auditor, with a bucket in his head, and reports the process as insecure (with strong copy/pasted wording to scare management) for the following reasons:

  1. On password change, the complexity of the password (you know, password length, use of lower and upper case and all that crap) is validated on the client side, not on the server side. And there goes a rant that says “Passwords should be at least XXX characters, and have lower case and upper case letters in order to be secure, yada yada yada”, so the manager would look at me like “We trusted you… look what you did to us, our passwords can be less than XXX characters if the user hacks the client-side code!”.

  2. It has the word MD5, and we just heard that MD5 is broken (remember this was 2005~2006), and there goes the rant about how MD5 was recently cracked; and again, the manager gives you that look, like saying: you’re a lost cause, I’ve always been scared of open source and the freetards around it.

I think the tradeoff of having one user hack the JS to force himself a weak password, in exchange for protecting all users from easy eavesdropping, is a good one. I also think that sending an MD5 over the wire is better than sending a clear password (although there’s of course SSL involved, I’m talking about ‘clear’ from an application perspective). It has the side benefit of the password staying secure while in memory on the server side.

Auditors are obsessed with the things they were told to look after: SQL injection, XSS, or things they test with automated tools. That’s fine and welcome (you don’t need an auditor for that, BTW, but it’s good to have more eyes on the problem). But I’ve never seen an auditor trying to SQL-inject, say, field 35 or 45 of an ISO-8583 message, something anyone can do by forging the track2 of a card and going to a shop around the corner. Take what they say with a grain of salt, and remember that not all, but most of them, are just talkers.

I’m playing with the idea of making the client perform a really large number of iterations on the hash to slow it down (kind of a client-side bcrypt) without requiring too much CPU on the server, and then have the server do some more rounds, perhaps with a final bcrypt pass. I’m also planning to send some timing information to the server, in order to alert on client hardware/software changes (how long did it take you to run 100K hashes?). We’ll have to figure out how to explain this in our next audit…
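That key-stretching idea could look roughly like this; the class name, iteration count, and the use of MD5 as the round function are placeholders (a real design would use a modern KDF such as bcrypt or PBKDF2):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of client-side key stretching: re-hash the credential N times,
// so brute-forcing a captured value costs N hash operations per guess.
public class SlowHash {
    public static String iterate(String seed, int rounds) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] h = seed.getBytes(StandardCharsets.UTF_8);
            for (int i = 0; i < rounds; i++)
                h = md.digest(h);       // digest() resets, so we can chain
            StringBuilder sb = new StringBuilder();
            for (byte b : h)
                sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The server would store the stretched value and could apply a few extra rounds of its own on top.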

You want a timeout


Every single week for the last 14 years, I have had discussions with developers, CTOs and CIOs about channel timeouts.

The discussions usually start with a customer requirement asking us to keep established socket connections forever.

They say “We want the socket to stay always connected, forever. We don’t want to see disconnects. Our systems are very reliable, our remote endpoint partners are very reliable, we don’t want a timeout”.

So I usually start with the Fallacies of distributed computing but I’m never lucky. I try to explain that I don’t want to die, but it just so happens that I will certainly die, sooner or later. It’s life.

Disconnections happen, networking problems happen all the time, router and firewall reboots, and the most evil situation, a paranoid firewall administrator configuring very tight timeouts.

When jPOS is the client, and the channel is idle for a long period of time, having no timeout is actually not a big deal. Imagine a situation where the channel is connected for say 5 minutes, but our paranoid FW administrator had set a timeout of 3 minutes to disconnect the session. While jPOS believes we are connected, we are actually not connected, so when a real transaction arrives, and we try to send it, we find out we are no longer connected. That will raise an exception, we’ll reconnect, and we’ll send the message (a few seconds later). So the problem is just a delay that may put us out of the SLA for this particular transaction, but it’s still not a big deal, the system will recover nicely.

But when jPOS is the server and we don’t have a timeout, the client will establish a new connection, but the old one will remain connected forever. A few hours or days later, these connections will accumulate and we’ll hit the maxSessions of the QServer configuration (see the Programmer’s Guide, section 8.4). The only way to recover is to restart that particular QServer, something that needs to be done manually.

You can set SO_KEEPALIVE at the channel level in order to detect these broken connections, and in order to prevent some firewalls from disconnecting your session, but the KEEPALIVE time is OS dependent.

Our recommendation is to send network management messages from time to time (i.e. every 5 minutes) and have a reasonable timeout of say 6 minutes.
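In jPOS terms, that recommendation usually translates into a timeout property on the channel configuration. A minimal ChannelAdaptor sketch; host, port, channel/packager classes and queue names are illustrative, so adapt them to your deployment:

```xml
<channel-adaptor name='my-channel' class="org.jpos.q2.iso.ChannelAdaptor" logger="Q2">
    <channel class="org.jpos.iso.channel.NACChannel"
             packager="org.jpos.iso.packager.ISO87APackager">
        <property name="host" value="192.168.1.1" />
        <property name="port" value="8000" />
        <!-- 6 minutes, in milliseconds: slightly above the 5-minute echo interval -->
        <property name="timeout" value="360000" />
    </channel>
    <in>my-channel-send</in>
    <out>my-channel-receive</out>
</channel-adaptor>
```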

There’s another situation where you want a timeout. Imagine an ideal network (I call it ‘Disney LAN’) where the connection remains ESTABLISHED from a TCP/IP standpoint, but the remote host’s application is dead and not answering your requests. You can of course detect that at the application level (i.e. MUX) and proactively initiate a reconnection, but if that logic fails (or you never implemented it), a reasonable timeout will recover from the situation automatically. The remote host doesn’t reply, the call to the channel’s receive times out, we reconnect, and with a little bit of luck, we get a new session that actually works.

Poor man in the middle


This is a personal story not related to jPOS, but it’s somewhat related to payment networks and security, so I hope you enjoy it.

Back in the 80s here in Uruguay, when I was in my early 20s, credit cards started to become popular and merchants started to use CATs (credit authorization terminals) that used some mysterious protocol to talk to some servers in order to get authorizations.

I didn’t have a card, but my partner in crime since age 7 — my friend @dflc — got one, I think it was a VISA.

We analyzed the card and of course, we were very interested to figure out what was written in that magnetic stripe, but we didn’t have a reader. We probably tried with some tape recorder heads in order to get some audio, I don’t remember, but I’m sure we had to try that.

One day, we called a store in the new mall in the city, Montevideo Shopping Center, for personal reasons (probably wanted to buy a present or something like that). Not 100% sure I was the one that placed the call, but I think I was. I’m very anxious, so I never asked our secretary to place the calls for me, if a number was busy I would dial 100 times in a minute until the call completes (and this was rotary dialing). If I recall correctly, the store was Pascualini, still popular these days. After busy, busy, busy, I got to hear some ‘click click click click’, followed by silence…

When you are into modems and BBSs, there’s no doubt about what you do in a situation like that: you whistle! A simple short whistle starting at around 900Hz and going up to 1200~1300Hz is easy to produce, and you get V.21 and Bell 103 modems to start their connection-establishment dance.

So I whistled (or my friend did), and heard the modem. We knew it wasn’t a FAX (no birps); we knew exactly what it was: that new tiny CAT thing, an Omron CAT 90 that we had started to see at some stores.

cat90

We knew exactly what was happening: that thing wasn’t detecting that the line was free before blindly starting to dial. Our eyes opened, we simultaneously smiled, and it took us probably a few milliseconds to know what was next: man in the middle!

We also saw a business opportunity (we were hungry): we knew we could build a little piece of hardware to sense the DC voltage of a free versus busy line and sell it to the local acquirers (free-line tones were not standard in those rotary-dialing days).

As a first step, we planned for a proof of concept. We wanted to monitor a transaction, record it on tape, and let the real acquirer process the transaction. We were into BBSs and ran our own BBS those days. My friend had his home land line, plus 8 BBS lines in his bedroom (along side with a MicroVAX with two SCSI 500MB mirrored noisy disks spinning day and night), so we could use one line to dial the merchant, and another one to dial the acquirer. We had to do some war dialing and small social engineering to get the acquirer’s listed phone numbers, lucky for us, numbers were in the phone book.

We played with phones since we were 8 or 10 years old, I remember I used to short-circuit the phone to break my mother’s long calls when I needed it. We did phone patches for the ham radio stations, my friend @dflc used to develop his own telephone answering machine using discrete IC components (4011s and 4001s here and there) and a pair of cassette recorders, so the required hardware was ready in a couple days.

We needed a way to know when to initiate a call to the store exactly when the transaction was going to be initiated. There were no cell phones those days, but of course, we had VHF handhelds, actually a pair of Icom IC-02ATs. I used handhelds since high school, I thought they were the coolest thing to have and I still don’t understand why ladies were not impressed by a guy with that kind of technology hanging in his belt, unbelievable…

ic02at

The distance between my friend’s place and the mall was small, just 600 meters.

map

The plan was easy: My friend (who owned the card) would go to the mall, buy something, send some signals (without talking, just a few push to talk pushes – for those in the know, that would be an A1 encoding) at the right time when the lady at the store was about to process the transaction. I’d be in our NOC (his bedroom) calling the merchant, the acquirer, hitting ‘REC’ on the recorder, and patching both lines with our little hardware (also monitoring with headphones).

We did a test VHF connection, and although those handheld transceivers could cover 80+ kilometers in good conditions, the mall was a Faraday cage and I didn’t hear him. So we needed a plan B. My friend’s brother had a mobile VHF in his car, with plenty of power (50W). So we called him (via radio) and luckily he was close to the area. We explained the mission; no questions asked, he took us seriously. He parked the car close to the mall (so he could listen to the short transmission from inside the mall) and QSP’d to me. (QSP, Q2 aka QSP version 2, rings a bell?) FWIW, QSP is the Q-signal code for “relay message”.

So we did the transaction and everything worked on the first try; @dflc bought himself a leather wallet or a belt, I can’t remember. Knowing him, I’m sure he still has it as a trophy and a way to remember that fun hacking day. The transaction was properly approved by the Visa acquirer (which, BTW, now runs jPOS); we were just men in the middle.

We got the transaction recording, and I remember we analyzed it in several ways, replayed it against different modems, etc.

On the business side, we had meetings with the local card acquirers where we explained their vulnerability and offered a solution. Of course, they didn’t like the fact that we, young suspicious “hackers/crackers”, were telling them what to do, so they did nothing.

We kept our grin for a good while, it was a nice, albeit pretty simple, hack.

TransactionManager getId and getContext


TransactionParticipants get called by the TransactionManager using
their prepare, prepareForAbort, commit and abort callbacks, but for
situations where one needs access to the Context in a deeper class
called by the participant (i.e. Managers), we now have a few
static ThreadLocal-based methods:

  • Serializable getSerializable()
  • Context getContext() (in case your Serializable is actually an instance of
    org.jpos.transaction.Context)
  • Long getId ()

Please note returned values may be null when run outside the TM life-cycle.

Also note that the TM takes care of PAUSED transactions, setting
these values on the resumed thread.
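The mechanism is essentially a ThreadLocal that the TM binds around each participant callback. Here is a self-contained sketch of the pattern (not jPOS’s actual implementation; class and method names are illustrative):

```java
import java.io.Serializable;

// Sketch of the ThreadLocal pattern behind getSerializable()/getId():
// the manager binds the current context before invoking participants,
// so deeper code running on the same thread can reach it statically.
public class TxnLocal {
    private static final ThreadLocal<Serializable> CONTEXT = new ThreadLocal<>();
    private static final ThreadLocal<Long> ID = new ThreadLocal<>();

    static void bind(long id, Serializable ctx) { // called by the manager
        ID.set(id);
        CONTEXT.set(ctx);
    }

    static void unbind() { // called when the participant returns
        ID.remove();
        CONTEXT.remove();
    }

    // Both may return null outside the transaction life-cycle.
    public static Serializable getSerializable() { return CONTEXT.get(); }
    public static Long getId() { return ID.get(); }
}
```

A Manager-style helper called from a participant would simply invoke the static getters, without the Context being passed down the call chain.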

See ChangeLog – 6da5f3 for details.

jPOS 1.9.8 released

  • jPOS 1.9.8 has been released, the new development version is 1.9.9-SNAPSHOT
  • jPOS-EE 2.0.6-SNAPSHOT has now upgraded dependencies, including support for Jetty 9
  • jPOS-template has a new genDocker task that installs a jpostemplate image

See ChangeLog for details.
