Wednesday, March 28, 2012


Amalga Dialogs Broken due to Sync Parser not Starting


The other day, while developing components in Amalga, I found that I was unable to check out existing dialogs and was also unable to create new ones.


The first step when encountering unexpected issues in Amalga is to check the azyxxi logs table (or use your handy log viewer).  In this instance I found a log entry reading:

[Service=Amalga.ScriptEngine.Sync ] Amalga.PackageCountIsZero:  cause=No packages for service instance name [Amalga.ScriptEngine.Sync] on machine FOO

OK, so there's something wrong with the sync parser.  I went to Windows Services (I know you're supposed to use the management console, but this was in dev) and tried to cycle the sync parser.


Services:  Windows could not stop the Amalga.ScriptEngine.Sync service on Local Computer.
Error 0xffffffff: 0xffffffff


Pretty common error.  It means that the parser service you just tried to shut down was never running in the first place.  Whenever you see this error, check the Windows event log:




[Amalga.ScriptEngine.Sync]
PID: [5004]
eventID: [Parser_EVENTID_10]: (10)
type: [Error]
resourceID: [262154]
 ScriptEngine: [Amalga.ScriptEngine.Sync] Package or service error.  An error occurred with the package or the service. Messages will no longer be processed. PackageName: [_ASE_Initialization] PackageID: [00000000-0000-0000-0000-000000000000] MessageID: [(unknown)] ErrorID: [Stop] Amalga.PackageCountIsZero:
cause=No packages for service instance name [Amalga.ScriptEngine.Sync] on machine FOO;
mitigation=No packages for the service instance were found. The likeliest reason might be that an upload needs to be done using Script Engine Explorer, or a package needs to be added to this service instance. It can also happen if a third-party dll was not found. All scripts should be derived from the appropriate base class. A message script needs to derive from ParserMessageBase. Table scripts must derive from TableScriptBase, and Key scripts must derive from KeyScriptBase.;
exceptionerrorResponse=Stop;

OK, pretty straightforward: there are no packages uploaded for the sync parser to run.  I then went into ScriptEngineExplorer to find the missing package, except there is no default package named "SyncParser".  WTFs ensued for about 2 days.  Checking the Amalga forums I stumbled across a post detailing several error messages concerning both dialogs and the sync parser, which led to this little tidbit:  "UM_USER package is not subscribed to the ScriptEngine service."  Weird.  I checked our prod system, where (usually) things run a little smoother, took a look at the ScriptEngine DB, and checked which packages were subscribed to which services with this query:


SELECT parserservice.*, package.*, parser.*
  FROM ScriptEngine.dbo.ParserServicePackageDef parserservicepackage
  INNER JOIN ScriptEngine.dbo.PackageDef package
    ON parserservicepackage.PackageDefID = package.PackageDefID
  INNER JOIN ScriptEngine.dbo.PackageDefParserDef packageparser
    ON parserservicepackage.PackageDefID = packageparser.PackageDefID
  INNER JOIN ScriptEngine.dbo.ParserDef parser
    ON packageparser.ParserDefID = parser.ParserDefID
  INNER JOIN ScriptEngine.dbo.ParserService parserservice
    ON parserservice.ParserServiceID = parserservicepackage.ParserServiceID
    AND parserservice.ServiceInstanceName LIKE '%Sync%'

Lo and freaking behold, in our prod system there were three packages subscribed to the SyncParser service (UM_USER, rnVitals, and QuickEdit), and on our dev box there were none.  Log into the Amalga Management Shell with admin rights and run the following command:

SubscribeToPackages Amalga.ScriptEngine.Sync UM_USER
Bam.  The sync parser service starts working, and the dialog manager can check out and create things again.  This is just my experience; I'm not sure why the dialog manager depends on the UM_USER package, and that will probably be the next thing to investigate.


Thursday, October 13, 2011

Simple clojure string examples

One of the easiest places to delve into a new programming language, I've always found, is string manipulation.  It's not the most exciting aspect of most modern programming languages, but so many everyday tasks either boil down to manipulating strings or rely on it at one level or another.  Clojure, like Lisp before it, doesn't need a separate family of functions for most string work: its core sequence functions operate on anything that can be treated as a sequence, and a string is just a sequence of characters.  That genericity takes some real getting used to coming from strongly typed languages such as Java or C#, but it means that even though these examples use strings, most of them work with any type of collection:  sets, lists, vectors, etc.  Here are some examples of some basic data transformations you can do with Clojure:



01. First function example
  (first "test") = \t
Returns the first item in the collection (for a string, a character literal).

02. Rest function example
  (rest "test") = (\e \s \t)
Returns all the items in the collection except the first.

03. Str function example
  (str "test") = "test"
Returns its arguments concatenated into a string.  If more than one argument is passed they are concatenated together; with a single argument, str is equivalent to Java's toString() method.

04. Set function example
  (set "test") = #{\e \s \t}
Returns a set of the distinct items in the collection (order is unspecified).

05. Subs function example
  (subs "test" 1) = "est"
Returns a substring from the designated (0-based) starting index to either the string's end or a designated ending index.  Note that subs works only on strings, not on collections in general.

06. Nth function example
  (nth "test" 1) = \e
Returns the item at the given (0-based) index in the collection.

07. Reverse function example
  (reverse "test") = (\t \s \e \t)
Returns a collection with the original collection's items in reverse order.

08. Drop function example
  (drop 2 "test") = (\s \t)
Returns all but the first n items in the collection.

09. Drop-last function example
  (drop-last "test") = (\t \e \s)
Returns all but the last item in the collection.

10. Count function example
  (count "test") = 4
Returns the size of the collection.

11. Cons function example
  (cons "test" "01") = ("test" \0 \1)
The cons function from Lisp.  Constructs a new sequence with "test" as the first item, followed by the elements of "01" treated as a sequence.

12. Concat function example
  (concat "test" "01") = (\t \e \s \t \0 \1)
Returns a lazy sequence representing the concatenation of the first collection's elements with the elements of the second collection.

13. Lazy-cat function example
  (lazy-cat "test" "01") = (\t \e \s \t \0 \1)
Like concat, but a macro that delays evaluating each collection until it is needed.

14. Take function example
  (take 2 "test") = (\t \e)
Returns the first n items in the supplied collection.

15. Take-last function example
  (take-last 2 "test") = (\s \t)
Returns the last n items in the supplied collection.

16. Take-nth function example
  (take-nth 2 "test") = (\t \s)
Returns every nth item in the supplied collection.
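Since these sequence functions return seqs of characters rather than strings, you'll often finish a pipeline with (apply str ...) to rejoin the characters into a string. A couple of quick examples combining the functions above:

```clojure
;; reverse returns a seq of characters, so apply str to rejoin them
(apply str (reverse "test"))
;; => "tset"

;; take the last two characters and rebuild a string
(apply str (take-last 2 "test"))
;; => "st"
```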

See the Clojure 1.3 API for full specifications:  http://clojure.github.com/clojure/

Tuesday, July 12, 2011

Converting VARBINARY to VARCHAR yields only opening character

Came across this interesting feature of SQL Server the other week. The system we're working on takes incoming text files and stores their contents in full as VARBINARY fields in order to maintain a complete collection of messages that have been uploaded to the system. Currently we're circumventing this default load process and loading directly into the VARBINARY columns from our external database.

DECLARE @foo NVARCHAR(3)
SET @foo = 'bar'
SELECT CONVERT(VARBINARY,@foo)

This yields the expected UTF-16 (Unicode) binary: 0x620061007200

However when unpacking these VARBINARY fields to VARCHARs we get the following:

SELECT CONVERT(VARCHAR,CONVERT(varbinary,@foo))

Yielding: 'b'

Did you see the error?

The problem arises when unknowingly unpacking an NVARCHAR as a VARCHAR because you're unaware of what the initial datatype was before it got converted to binary. The variable @foo was originally an NVARCHAR, so its binary form retains the extra UTF-16 bytes: one 0x00 high byte per character here. When that binary data is unpacked into a VARCHAR expecting single-byte ASCII data, the embedded null byte after 'b' effectively terminates the string, leaving only the opening character.
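A minimal sketch of the fix: unpack with the same character type you packed with. It's also worth giving CONVERT explicit lengths, since a bare VARBINARY or VARCHAR in a CONVERT defaults to 30 characters and can silently truncate longer values:

```sql
DECLARE @foo NVARCHAR(3)
SET @foo = 'bar'

-- Pack as binary, then unpack with the matching character type
SELECT CONVERT(NVARCHAR(3), CONVERT(VARBINARY(6), @foo))  -- yields 'bar'
```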

Say two different programmers wrote these two separate pieces. The first programmer loads NVARCHARs into the binary fields, and the second keeps trying to extract those binary fields as VARCHARs, only to find the data truncated to a single character. Obviously the best thing to do in this case is to have the programmers communicate with each other in order to maintain data consistency. However, if that proves impossible for whatever reason, it may be useful to check that your fields aren't being inadvertently truncated when unpacked from their binary columns.

tl;dr Just use NVARCHAR for everything and save yourself the headache

Sunday, February 6, 2011

Getting emacs's rgrep working in windows

In order to get many of the useful emacs utilities working in windows you'll need to install a UNIX emulator for windows. I chose to go with cygwin.

There are 3 different ways of grepping in emacs:
grep
lgrep (local grep - will only search the directory you're currently in)
rgrep (recursive grep - will search subdirectories)

Installing cygwin seemed to grant access to two of these: grep and lgrep. rgrep, meanwhile, continued to spout "Parameter format not correct" messages.


> rgrep -nH "meta" *.*

find . "(" -path "*/CVS" -o -path "*/.svn" -o -path "*/{arch}" -o -path "*/.hg" -o -path "*/_darcs" -o -path "*/.git" -o -path "*/.bzr" ")" -prune -o -type f "(" -iname "*.*" ")" -exec grep -i -nH -e "meta" {} /dev/null ";"
FIND: Parameter format not correct

Grep exited abnormally with code 2 at Sun Feb 06 23:12:00


It turns out that lgrep uses only the normal grep invocation, while rgrep pipes its arguments to "find" in order to walk the subdirectory tree, running grep on each folder individually. Emacs uses the Windows find by default and must be pointed at cygwin's find instead in order for rgrep's unix-style arguments to work.

Please add the following line to your .emacs:
(setq find-program "C:\\path-to-cygwin\\bin\\find.exe")

*NOTE* You need to fully exit emacs and restart instead of merely running load-file ~/.emacs in order to overwrite many of the cached cygwin interfaces.

At this point rgrep sort of works. Running the same search query as above, I started receiving the following output (note how we are now running cygwin's find):

> rgrep -nH "meta" *.*

C:\cygwin\bin\find.exe . "(" -path "*/CVS" -o -path "*/.svn" -o -path "*/{arch}" -o -path "*/.hg" -o -path "*/_darcs" -o -path "*/.git" -o -path "*/.bzr" ")" -prune -o -type f "(" -iname "*.*" ")" -exec grep -i -nH -e "meta" {} NUL ";"
/usr/bin/find: `grep': No such file or directory
/usr/bin/find: `grep': No such file or directory
/usr/bin/find: `grep': No such file or directory
.
.
.
etc

Emitting a "/usr/bin/find: `grep': No such file or directory" error for each unsuccessful search attempt.

After much searching I found a workaround here detailing how to use "/dev/null" as the null device instead of the Windows "NUL" device in order to suppress the warning messages found in the grep output buffer.

Please add the following to your .emacs file:

;; Prevent issues with the Windows null device (NUL)
;; when using cygwin find with rgrep.
(defadvice grep-compute-defaults (around grep-compute-defaults-advice-null-device)
  "Use cygwin's /dev/null as the null-device."
  (let ((null-device "/dev/null"))
    ad-do-it))
(ad-activate 'grep-compute-defaults)

This fixed rgrep for me. Nothing I've tried has gotten grep -r working, however; it still only searches the current directory. Supposedly installing ack will make all these problems go away, but I haven't gotten it working quite yet. Hope this helps.

Thursday, April 24, 2008

The Problem with IDEs

I've always been a proponent of IDEs, and I've always believed that using the correct IDE to solve a given problem vastly increases productivity.

My experience with IDEs has been a tumultuous one. When first starting out in computer science we were introduced to such complex editing tools as Textpad and (shudder) Notepad. Those text editors gave way to an entire year writing C code in vi, which segued into more robust editors such as Visual Studio and Eclipse. In that infancy of IDE knowledge, that was all IDEs were and all they represented: a larger, prettier-looking text editor.

That was up until about 2 years ago when I took Software Engineering from Phillip Johnson. It was only then that the full scope of this Java language we had been using came into play. Now instead of dealing with five or ten classes we were dealing with hundreds of classes and thousands of method calls. Without the formal training with the Eclipse IDE, specifically tailored to the Java language, I would have been not only lost, but even worse, unproductive.

From that day forward I was sold on IDEs. Code completion and IntelliSense were next to godliness; refactoring took just a few clicks; package structure, library imports, code reviews: Eclipse had it all. The future seemed to promise a near-infinite increase in productivity as I became ever more efficient with Eclipse's keyboard shortcuts and macros.

Then I started my first job in the software development industry, and about a week into it I had a conversation with a coworker which went something like:
Me: "Oh so what IDE do you guys typically use when you're working on projects?"
Coworker: "None."
Me: ".... what???"
Coworker: "I usually code in textpad or notepad and compile it with ant. Most people just pick a single text editor and get really good with it."
Me: ".... what???"

How could it possibly be that in a professional business environment there was no cohesion between developers and the tools they used? Were they perhaps ignorant of how much faster IDEs let you program? How was it even possible to be productive?

The IDE Divide, by Oliver Steele, argues that the developer world is divided into two competing camps: the language mavens versus the tool mavens. His argument runs basically thus: to the language mavens, the real productivity increases occur when more powerful languages are introduced, and all IDEs are functionally equivalent text editors. Conversely, the tool maven would argue that all languages do functionally the same thing, implement the same methods, accomplish the same goals, and that the real productivity increase comes from the tools used to wield them. He goes on to argue not only that these camps exist but that they are mutually exclusive: developers cannot (or only in the rarest of cases) be both language and tool mavens simultaneously. This is because learning new development tools or new languages makes it easier to learn the next tool or language that comes along, to the exclusion of the other. A positive feedback loop therefore exists within each camp, making it harder and harder to bridge the functional gap between the two.

Now, Oliver Steele is quite obviously a language maven, but his argument still stuns me. I've had the luck of migrating between two languages that are nearly identical (Java -> C#), but in the past I've noticed problems whenever I venture outside the languages I've grown comfortable with, let alone when left without my trusty Eclipse editor. Luckily, Visual Studio provides many of the features that Eclipse does, and more.

I've seen Visual Studio developers dismissed as "IDE users" rather than software developers, which would be enough to enrage even the most open-minded developer. But programming in immense languages like C# or Java is "clearly" (to me at least) easier and more efficiently done within an IDE. So many modern high-level languages contain such immense APIs and so many different libraries that it's seemingly impossible to find the method you're looking for within this giant pile of information.

Has programming become intractable, where one has to leaf through APIs hundreds of pages long looking for a single method signature? Is it inevitable that higher-level programming languages will evolve to become more and more complex until they eventually become so obscure that they are essentially impossible to use without a specialized IDE?

I sometimes wonder if my coworker was right: are the more advanced IDEs actually useful, or are IDEs really useless? Should we stick with the most barebones input possible? Is it even possible to code proficiently in gargantuan languages such as Java or C# without a proper IDE?

The alternative according to Oliver Steele (and it's a compelling one) is to rely on languages to provide the functionality we seek: put the spare development effort into learning the features of new languages and how best to apply them in our projects.

Either way I spent my free time for the last few days finally deciphering how to use emacs. I know I'll be forced to use Visual Studio at the client job site I'm moving to tomorrow, but this article could possibly have changed my whole programming productivity philosophy (alliteration!). I should give it a few weeks to digest before I determine what IDE's really mean to me and how they figure into the grander scheme of being a complete developer.

Thursday, April 17, 2008

The Requirements vs the Analyst

I always wondered what the "Systems Analyst" job entailed because the job title just sounded so damn cool. Now that I've read Software Requirements by Karl Wiegers I have absolutely no envy for anyone with that job title, in fact I sympathize in the utmost with what must be one of the hardest jobs in the software engineering industry.

In his book Karl Wiegers goes into great depth and detail outlining the software requirements collection process, which many have touted as the most important aspect of software engineering, because the cost of correcting flaws left undetected in the requirements grows dramatically the later in the project they are found. The book breaks down the groups of users, differing techniques for eliciting requirements from users or managers, and the clarification and organization of those requirements into a comprehensible software requirements specification.

I've personally found requirements practice to be the absolute hardest part of software engineering. Not that working with other people is undesirable for the average software engineer, but it is exhausting whenever the task is to get a large number of people to agree on a single definition or interpretation of a business rule. The greatest tool I found in Wiegers's book was the 'education' concept. If time permits, the ability to educate the users, managers, and every group to whom the finished product will be of value is the greatest tool one has. Clear communication and education about the importance of precise requirements definition go a long, long way toward making the requirements-gathering process bearable.

The ideas and processes I had in mind whenever someone mentioned "requirements gathering" involved a bunch of emails, hallway conversations, maybe a lunch or two, and possibly a meeting involving a few people. While Wiegers's scenarios represent the optimal case given enough time and manpower, his vision of cooperation and communication between users and developers really would be a thing of beauty if anyone were able to put it into practice.

Monday, March 31, 2008

Refactoring and Cyclomatic Complexity

"Refactoring" by Martin Fowler is a great book that gives formal definition to a huge mishmash of informal programming procedures and habits. It not only argues the benefits and "whys" of the whole refactoring game (which are often informally understood and championed by many), but also gives formal definition to the types of code decay most often occurring as well as formal definitions and names to the possible solutions.

Fowler lovingly refers to this code decay as "code pungency" or "bad smells in code" as encountered by a developer. Long before that, in 1976, Thomas McCabe coined the term "Cyclomatic Complexity" in a paper outlining a software metric for analyzing the complexity of programs. The metric measures the number of linearly independent paths through a given chunk of source code.

"Cyclomatic complexity is computed using a graph that describes the control flow of the program. The nodes of the graph correspond to the commands of a program. A directed edge connects two nodes if the second command might be executed immediately after the first command." [wikipedia]

Basically, as the number of conditional statements in a program grows, so does the number of corresponding paths, and therefore so does the graph representing the program's control flow. A program with no if/else statements has a single path. Add one 'if' statement and the number of paths grows to two: one if the condition returns false, another if it returns true. Any boolean test forks the program's path, not to mention 'while' and 'switch/case' statements, and so on.
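As a toy illustration (my own sketch, not from Fowler's or McCabe's text), cyclomatic complexity can be computed directly from a control-flow graph using McCabe's formula M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components:

```python
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe's metric: M = E - N + 2P."""
    return len(edges) - num_nodes + 2 * num_components

# Control-flow graph of a function containing a single `if`:
# entry -> if; if -> then -> exit (condition true); if -> exit (condition false)
edges = [("entry", "if"), ("if", "then"), ("then", "exit"), ("if", "exit")]
print(cyclomatic_complexity(edges, num_nodes=4))  # 2: one `if` yields two paths
```

This matches the intuition above: a straight-line function has complexity 1, and each added decision point raises it by one.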

The cyclomatic complexity metric serves as a baseline indicator of when a program needs refactoring. At the whole-program level the analysis is of little use, because every program must include some degree of complexity in order to be useful. The measurement becomes useful at the method level, where a method containing some 30 or more linearly independent paths is clearly marked as overly complex and should be scheduled for refactoring.

Several tools exist which do good jobs of measuring this metric:
[JavaNCSS]
[Dependency Finder]

One which includes an excellent visual output of the state of a project's complexity is Panopticode, which also provides visual output of your given level of code coverage as a bonus.

Example: The complexity state of the Cruise Control 2.6 project (hit refresh until it loads).

All in all, refactoring is an essential concept for every programmer to understand; even those who practice it already stand to gain a great deal by understanding why they do what they do. I hope to find other refactoring tools to aid in the development process soon.