
Wednesday, 27 June 2012

Persisting to Neo4j via Spring Data (or, "Aren't We Persistent?")

Hi gang!

Ok, I'm back with a new post, this time about a couple of quirks I ran into while implementing some test cases for Spring Data using Neo4j.  They're not bugs by any stretch; it's just new behaviour to get used to as you venture into the Spring Data world (which I'm loving, by the way).

During my copious amounts of downtime (that should be read while imagining me rolling my eyes so hard that I fall over backwards in my chair), I've been putting together a little playground for me to mess around with and play with Spring Data.

It's definitely evolving and changing as I change things up and try out new ideas, and I fully plan on sharing more about this in future posts.

For now, though, I'm just discussing a couple of potential pitfalls newcomers to Spring Data might fall prey to (please pardon the prepositional phrase; I'm sure it won't be the last one).

Background
I've wanted to play with Spring Data a bit more seriously for some time now, so I started a few weeks ago and, I have to say, I'm loving every second of it.  (I'm already a huge Spring fan, and the annotations continue to make my life easier.)

My sandbox goes something like this: After having gone through the docs for Spring Data (especially "Good Relationships"), I thought I'd try out something similar for myself, borrowing the whole "store" concept (as it seems to me to be the best, first choice for implementing a graph database).

Instead of using the whole "movie store" concept, I switched to something a little different to avoid total code reuse (I do borrow some code from the link above but modify it an awful lot).

For any geeks around my age (or older), you will remember a certain computer software retailer called Babbage's.  I have fond memories of begging my parents to go into the store every time we passed one, which wasn't often (at least I don't think it was...).  GameStop Corporation went on to purchase Babbage's (and EB Games, and a bunch of other software retailers), so you're unlikely to see a Babbage's by that name.

With that short trip down memory lane finished (more like memory cul de sac), I decided to model my domain after the concept of a software retailer.  My store will cleverly enough be called Von Neumann's (any CS major and most geeks out there are currently groaning at that joke).

Domain Model
Currently, the domain model consists of the following:



Even looking at the UML diagram above, we can see that it's based on a graph model (can you pick out the entities and/or the relationship(s)?).

Test Cases' Setup
After setting up the relevant project (which I did as a Maven project), I set forth creating some tests using JUnit.  Before creating the actual test cases, I needed to make sure I had my testing context set up.  I also needed a way to ensure that any data being persisted was wiped clean after each run.

Fortunately, instead of having to create such functionality for my project, I learned that Neo4j already has a handy solution!  The ImpermanentGraphDatabase.  This little gem can be found in the Neo4j kernel.  Specifically, I added these lines to my POM (you can see the specific version I'm using, too):


<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-kernel</artifactId>
    <version>1.8.M03</version>
</dependency>


...and then adding the following line to my testing context:

<bean id="graphDBService" class="org.neo4j.test.ImpermanentGraphDatabase" destroy-method="shutdown"/>

And presto!  A suitable testing graph database for my test cases!

(Warning: I am using the latest version that I found worked best for me and is compatible with all my other dependencies.  ImpermanentGraphDatabase is available in earlier versions of Neo4j, as well.)

It should also be noted that I make use of both the Neo4j repository interfaces AND the Neo4jOperations class for persisting and retrieval.

Another note is that I've made the entire test class @Transactional.

Test Cases Proper
I'll list two of them below and a couple of the quirks I noticed.

Ensuring a Customer Can Make a Purchase
This test case consists of creating a Customer object, a couple Game objects, and making sure that Purchases can be created, persisted and retrieved (along with the associated entities). 


 
 @Test
 public void customerCanMakePurchases()
 {
  // setup our game constants
  final int QTY = 1;
  final String GAME_TITLE = "Space Weasel 3.5";
  final String GAME_TITLE_2 = "The Space Testing Game";
  final String GAME_DESC = "Rodent fun in space!";
  final String GAME_DESC_2 = "Tests in space!";
  final int STOCK_QTY = 10;
  final float PRICE = 59.99f;
  
  // setup our customer constants
  final String FIRST_NAME = "Edgar";
  final String LAST_NAME = "Neubauer";
  
  // create our customer for this test
  Customer customer1 = new Customer();
  
  // set the customer's properties (NOTE: "firstName" is an indexed property in the Customer entity, but "lastName" is not!)
  customer1.setFirstName(FIRST_NAME);
  customer1.setLastName(LAST_NAME);

  // create our games for this test
  Stock game1 = new Game(GAME_TITLE, GAME_DESC, STOCK_QTY, PRICE);
  Stock game2 = new Game(GAME_TITLE_2, GAME_DESC_2, STOCK_QTY + 5, PRICE + 5);

First, do the setup.  (And, for the sake of brevity, I'm leaving out the annotated entities.)

Nothing strange going on here--just creating two games and a single customer.  It is worth noting (for later on) that "firstName" is an indexed property of the Customer entity/node.  This means that it is searchable (also recall that Neo4j's default indexing engine is Lucene).


It had to be done.
The games we've chosen are clearly AAA-title games.  These tests should be interesting.


  // save entities BEFORE saving the relationships!
  template.save(game1);
  template.save(game2);
  template.save(customer1);

  // make those purchases! Support our test economy!
  // (NOTE: "makePurchase" actually uses the "template" parameter to persist the relationship, so no need to do it again)
  Purchase p1 = customer1.makePurchase(template, game1, QTY);  
  Purchase p2 = customer1.makePurchase(template, game2, QTY);

Above, we make sure to persist the 2 games and single customer.  We also do this prior to persisting any relationships.  This is necessary.  In this case, I make use of an instance variable called "template" which is actually an instance of Neo4jOperations.  This is one way of accessing the necessary persistence/retrieval functionality we need.

We then create 2 Purchase objects/relationships (Purchase is actually a relationship entity).  It is also worth noting that, instead of using the "template" object to persist the relationships, I've followed the Neo4j tutorial book "Good Relationships" and attempted another way of doing persistence, i.e. by passing the "template" object into the necessary method and having the method (in this case makePurchase) actually do the persisting of the newly-created Purchase.

Again, both "game1" and "game2" need to be persisted prior to persisting any relationships between them.

Still with me?


  // retrieve the customer
  Customer customer1Found = this.customerRepository.findByPropertyValue("firstName", FIRST_NAME);
  
  //
  // Tests
  //
  
  // can we find/retrieve the customer?
  assertNotNull("Unable to find customer.", customer1Found);
  
  // can we find the specific customer for which we are looking?
  assertEquals("Returned customer but not the one searched for.", FIRST_NAME, customer1Found.getFirstName());
  
  // does the retrieved customer have its non-indexed properties returned, as well?
  assertEquals("Returned customer doesn't have non-indexed properties returned.", LAST_NAME, customer1Found.getLastName());

  // retrieve the customer's purchases
  // (NOTE: We convert to a Collection just to make checking the number of purchases easier)
  Iterable<Purchase> purchasesIt = customer1Found.getPurchases();
  Collection<Purchase> purchases = IteratorUtil.asCollection(purchasesIt);
  
  // do we have the correct number of purchases?
  assertEquals("Number of purchases does not match.", 2, purchases.size());

So now we get to some actual testing.

The tests above are all straightforward.  We ensure the following:
  1. We can retrieve a persisted node, specifically via an indexed property.
  2. We can retrieve the correct persisted node for which we are searching.
  3. We can view non-indexed properties from the retrieved node.
  4. We can retrieve the correct number of relationships of the retrieved node.
As noted in Section 9.3 of "Good Relationships", we use Iterable for those node properties that are collections and are to be left as read-only, and Collection or Set for those collections that can be modified.
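The read-only convention above can be mimicked in plain Java.  Here's a minimal sketch of the design idea (this is my own illustration, not Spring Data's mapping machinery; the CustomerSketch class is made up):

```java
import java.util.*;

// Sketch of the convention: exposing the internal collection as Iterable
// offers traversal only, while a Set/Collection getter invites modification.
class CustomerSketch {
    private final Set<String> purchases = new HashSet<>();

    void addPurchase(String title) {
        purchases.add(title);
    }

    // Read-only view: callers can iterate, but cannot add or remove.
    Iterable<String> getPurchases() {
        return Collections.unmodifiableSet(purchases);
    }
}
```

With this shape, the compiler steers callers toward iteration; anyone who wants to mutate the collection has to go through addPurchase.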



  // go through the actual purchases...
  Iterator<Purchase> purchIt = purchasesIt.iterator();
  Purchase purchase1 = purchIt.next();
  
  // retrieving objects via Spring Data pulls lazily by default; for eager mapping, use @Fetch (but be forewarned!)
  // ...this means we have to use the fetch() method to finish loading related objects
  Stock s1 = template.fetch(purchase1.getItem());

What if we want to view a node's related nodes' data?


By default, Spring Data loads an entity's relationships lazily, which makes perfect sense (just picture how much memory would be needed if you had a very large, highly connected graph).  Also remember that there are implicit relationships between entities if an entity is contained as a property of another entity.


(Courtesy of Paramount Pictures' Forrest Gump)
"Mama said eager loading is like a box of chocolates: You never know what you're gonna get."
Well, at least the chocolates had an easily-determined, finite number in the box...

It is possible to have an eager retrieval by using the @Fetch annotation (be warned, though, that it will currently only work, by default, on node entities and collections of relationships that are based on Collection, Set, or Iterable; Spring Data may expand that in later releases, but I believe you can extend the mappings to work with other classes, if you so desire).

So, with our lazily-loaded relationships, we can use "template"'s fetch method to finish loading in the missing data.  It's as simple as that!  Anyone familiar with ORM will get this immediately.
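The lazy/fetch dance can be pictured with a tiny plain-Java sketch (my own analogy, not Spring Data's actual proxy machinery; the LazyRef class is a made-up name):

```java
import java.util.function.Supplier;

// Plain-Java sketch of lazy loading: the related object is represented by
// a Supplier that only materializes the data when explicitly "fetched".
class LazyRef<T> {
    private final Supplier<T> loader;
    private T value;        // stays null until fetched
    private boolean loaded;

    LazyRef(Supplier<T> loader) {
        this.loader = loader;
    }

    boolean isLoaded() {
        return loaded;
    }

    // Analogous in spirit to calling fetch() on the template:
    // completes the load on demand.
    T fetch() {
        if (!loaded) {
            value = loader.get();
            loaded = true;
        }
        return value;
    }
}
```

The graph stays cheap to hand around; you only pay for the parts you explicitly fetch.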


  // can we retrieve our first purchase successfully w/ its details?
  assertEquals("Purchased item not persisted properly.", GAME_TITLE, s1.getTitle());

  purchase1 = purchIt.next();  
  Stock s2 = template.fetch(purchase1.getItem());
  
  // can we retrieve our second purchase successfully w/ its details?
  assertEquals("Purchased item not persisted properly.", GAME_TITLE_2, s2.getTitle());
  
  // if we're here, then all tests ran successfully.  Hooray!
 }

Above, we run a couple more tests to ensure that we can, in fact, retrieve and view lazily-loaded objects from Neo4j.

Nothing to it!

Making Friends the Easy Way: By Creating Them!
For these tests, we're going to have a look at something a bit more social, i.e. customers befriending other customers (how Utopian!).  I suppose we could make them "rivals" or "enemies", but that's a bit too sinister for this blog (for now...).

Anyway, prior to this test method I have a setup method (annotated with the @Before JUnit annotation) that creates 5 customers (if you're interested, I persist them using a CustomerRepository I created by extending the GraphRepository and RelationshipOperationsRepository interfaces).


 @Test
 public void customerFriends()
 {
  // add friends
  c1.addFriend(c2);
  c1.addFriend(c3);
  c1.addFriend(c4);
  c1.addFriend(c5);

  // be careful! setting a "Direction.BOTH" relationship in one node entity will have the ENTIRE relationship saved (*including the adjoining node*) when saving just ONE of the two entities!
  // ...if you save both, Neo4j will remove the duplication (and you'll be left wondering why c1 is a friend of c2, but not vice versa)
  
  // save c1's friends
  customerRepository.save(c1);

In the code above, we have the customer "c1" make friends with the other customers (he's a social butterfly).

Now, perhaps the most important part of this whole blog is shown here (and below).  It has to do with relationships, specifically those that are annotated as being "Direction.BOTH".

As you can see from the comments in the code above, we need to be careful about how we create relationships between nodes and save them.  If we were to create the relationship between, say "c1" and "c2", and then persist each node (and therefore the relationships, which the customer repository will handle), we would notice that the relationships have gone awry, and that the duplicate relationship from "c2" has been removed.
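One way to picture why this happens: a Direction.BOTH friendship is a single underlying relationship identified by its two endpoints, much like an undirected edge keyed by an unordered pair.  Here's a plain-Java analogy of my own (not SDN internals; the FriendGraph class is made up):

```java
import java.util.*;

// Analogy for Direction.BOTH: an undirected edge is identified by the
// *unordered* pair of endpoints, so adding it from either side refers
// to the same single stored edge.
class FriendGraph {
    private final Set<Set<String>> edges = new HashSet<>();

    void addFriendship(String a, String b) {
        // a HashSet of the two names compares equal regardless of order
        edges.add(new HashSet<>(Arrays.asList(a, b)));
    }

    int edgeCount() {
        return edges.size();
    }
}
```

Adding the friendship from both sides still leaves exactly one edge, which is why creating the reciprocal relationship in both entities buys you nothing but confusion.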

So, what we're going to do is the following (keeping in mind that "c1" and "c2" have already been persisted in the setup method):

  1. Persist "c1" (and thereby its friendship to "c2").
  2. Retrieve "c2".
  3. Add any other friends' relationships to the retrieved "c2" (while not befriending back to "c1").
  4. Persist "c2" (and thereby its friendships to those added in Step 3).

Step 1 is done above.


  // we can't just continue to add friends to this.c2, as once we try to save this.c2, it'll remove the duplicate relationship between c1 and c2.
  // ...so, to get around this, we retrieve the persisted object from the DB
  Customer c2Found = customerRepository.findByPropertyValue("lastName", C2_LNAME);
  c2Found.addFriend(c3);
  c2Found.addFriend(c4);
  c2Found.addFriend(c5);

  // save c2's friends, which will preserve the existing relationship with c1! Old friends can remain friends!
  customerRepository.save(c2Found);

As you can see above, we finish the remaining steps (2 through 4).

Again, note that we DO NOT create a reciprocal relationship from "c2" to "c1".  (One would hope a friendship relationship would be reciprocal; unless you have stalkers or something...)


This would totally help.

All that's left now is to run some tests to ensure that our friends have remained friends throughout all this persisting!


  // retrieve c1 for some tests
  Customer c1Found = customerRepository.findByPropertyValue("lastName", C1_LNAME);
  
  Iterable<Customer> c1Friends = c1Found.getFriends();
  Collection<Customer> c1FriendsSet = IteratorUtil.asCollection(c1Friends);
  Iterator<Customer> custIt = c1Friends.iterator();
  
  int numFriends = 0;
  
  // let's make sure all of c1's friends were retrieved
  assertTrue("Friend not found.", c1FriendsSet.containsAll(IteratorUtil.asCollection(c1.getFriends())));
  
  // let's also make sure that c1 and c2 are still buds specifically (these two are inseparable...you should see them at ComicCon!)
  assertTrue("Friend not found.", c1FriendsSet.contains(c2));
  
  // let's make sure the exact number of friends returned is correct 
  while (custIt.hasNext())
  {   
   custIt.next();
   numFriends++;
  } // while
  
  assertEquals("Number of friends returned incorrect.", 4, numFriends);

  // if we're here, all is well! Huzzah!
}

Above, as in the first test, we make sure that all of the friendships have been properly preserved, both from "c1"'s and "c2"'s perspective.

Conclusion
In this post, we have seen the basics of persisting with Spring Data and a couple of the quirks I ran into.  These are documented within the Spring Data documentation, but it never hurts to bring these little nuances out into the light even further.

We also saw that the ImpermanentGraphDatabase is available to us through the Neo4j kernel, which is a wonderful tool for implementing test cases with quick setup and teardown--no need to write initializers and cleaners for a Neo4j installation!

So there we have it!  A first pass through persisting with Spring Data and implementing some unit tests using Neo4j and JUnit.

If anyone has any questions or I've made a mistake, please feel free to leave feedback.

We'll see you on the next post!

Wednesday, 15 February 2012

On the Subject of NoSQL (and a bit about graph databases)

Pretty formal-sounding title, yeah?

(I'm likely just suffering from title-writers' block.)

So, before I dive head-long into an actual graph database, it's probably a good idea to briefly discuss what makes a graph database a graph database.

RDBMS

For the past 30+ years, the world of databases has been primarily dominated by the colossus that is relational databases (i.e. RDBMS, or Relational DataBase Management System).  RDBMSes are well-known and well-studied, but it suffices to say that they can be used to model almost any kind of information (that is, RDBMS can be used to model a broad, general set of data).

I'm going to assume that the reader is familiar with basic RDBMS concepts like columns, rows and tables.  Information in an RDBMS is grouped into similar entities that can have relationships between them.  In this way, we can model just about any situation in this single kind of database.  For example, you can model everything from a school (teachers, classrooms, students, schedules, etc.) to an online business (products, orders, inventory, etc.).

Because each table consists of rows and each row is made up of columns (columns representing the types of information you want to store), we know what kind of data to expect in each table and database based on its schema.  This is great for ensuring that you don't try to save a product's name when it's expecting the product's price.

RDBMSes are also good both for looking up information and for handling the storage of data.  The concept of transactions is important; just as businesses need to deal with transactions every day, so too does a database that is used to enter data of a transactional nature (e.g. payments, orders).

Another important feature of any RDBMS is the ability to query the data.  SQL (Structured Query Language) is almost as old as RDBMSes themselves and is a powerful tool for looking up and modifying data in an RDBMS.  It might not be the most efficient tool on its own given the potential complexities of how a specific database is laid out (see: query analyzers and optimizers), but it can be tuned to be a very powerful tool with the right indices set up and the right query.  (SQL can also be prepared/compiled in some cases, but we won't get into that.)

So why use one of these so-called NoSQL databases?

NoSQL

Without going into too much detail about NoSQL (that's another post or ten on its own), NoSQL is really better off being called NoREL (i.e. non-relational, as in not following the traditional relational model).

An RDBMS is geared towards being a general solution for almost any model: it does everything fairly well, but it requires specialized tuning to run really well, and that tuning makes some areas better at the expense of others.  For example, it's difficult to make a database that is tuned for efficiently handling a high volume of transactions also be efficient at speedy look-ups and reads.

NoSQL databases provide specializations that RDBMS systems can't.  They come out of the box ready to be super-good at one or two areas, but not so great in others.  Think of them as pre-tuned databases, and one size definitely does not fit all.

NoSQL databases range in type from document stores (e.g. CouchDB, MongoDB) to key-value stores (e.g. Redis) to graph databases (e.g. Neo4j), to name but a few.  (A decent breakdown of NoSQL database types can be found here and here.)

You could likely tune an RDBMS to be quite good at a number of things, but it's a bit of a pain (trying to set up and tune proper indices is a painful and involved process).

This is partly why I disagree wholly with those who say NoSQL heralds the death of RDBMS.  Quite the contrary: I see NoSQL databases as being an excellent complement to RDBMS.

Another advantage of NoSQL databases is the fact that most of them are schema-less; that is, they can store arbitrary information without the need to structure it.  This can be very powerful when modelling heterogeneous information in a database.  It can also allow for the evolution of your data models without the need to completely overhaul and change your database (anyone who's tried to do that before with an in-production database knows just how painful that is).

NoSQL databases also tend to scale very, very well, often more easily than clustering together an RDBMS-based solution.

One point worth mentioning is the fact that RDBMSes are traditionally known as being ACID compliant.  ACID (which stands for Atomicity, Consistency, Isolation, Durability) is a very important concept for databases that handle transactional information (and most businesses do).  A discussion on ACID is well outside the scope of this post, but ACID compliance is commonly lacking in most NoSQL databases (most of them subscribe to the principle of Eventual Consistency).  This is definitely worth keeping in mind when choosing a NoSQL database to use.  A great example of an exception is Neo4j (a graph database), which is, in fact, ACID compliant.

(If anyone wants an actual discussion on ACID vs. Eventual Consistency and why it's important, let me know and I'll see about putting an article together.)

Graph Databases

As far as graph databases are concerned (took me long enough to get here), I strongly suggest going to this link and checking out the "What is a graph database?" and "Comparing Neo4j" tabs on the page.  The page does a wonderful job of explaining what a graph database actually is (big surprise) and how graph databases relate to other types of databases (both NoSQL and RDBMS).

While you do typically need to index nodes and relationships for searching (e.g. full-text searches over properties), strictly speaking a graph database is one that provides "index-free adjacency" (source: http://en.wikipedia.org/wiki/Graph_database).  This means that each element (i.e. node) has a link to its adjacent elements to follow--no index look-ups are necessary.
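Index-free adjacency is easy to show in miniature with plain Java (a hypothetical Node class of my own, not Neo4j's API): traversal just follows object references.

```java
import java.util.*;

// Index-free adjacency in miniature: each node holds direct references
// to its neighbours, so walking the graph never touches a global index.
class Node {
    final String name;
    final List<Node> neighbours = new ArrayList<>();

    Node(String name) {
        this.name = name;
    }

    // Undirected link: each node records the other as adjacent.
    void link(Node other) {
        neighbours.add(other);
        other.neighbours.add(this);
    }
}
```

Hopping from a node to a neighbour is just a pointer dereference, which is exactly why deep traversals stay cheap as the graph grows.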

Graph databases are a great way of representing graphs (remember those things with nodes and relationships?).  Graphs in a graph database typically consist of arbitrary nodes and relationships.  Each node and relationship can have assigned to it an arbitrary number of properties.  Properties are simply key-value pairs of information (e.g. "Name" = "Joe" and "Age" = 30).

So you can easily represent a family tree, a network diagram, or even a social network with a graph database.  For example, think of two nodes, each one representing a friend at work (call them Jason and Scott), and a relationship between them (representing their friendship).  So, each node would have properties like, "Name"="Jason", "Age"=30, etc.  The relationship between them could be labelled "KNOWS", and that relationship could have properties like "Since"="01/01/2001" and "At"="Acme Inc.".  All of a sudden, we now have a way to track friends, find out who knows whom, when they met, and where they met.

Now let's say somewhere down the line we learn something else about each person; say, Jason's birthday.  It's very easy to add a new property to Jason's node.
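The Jason/Scott example can be sketched with nothing but maps.  This is a toy model of the property-graph idea, not Neo4j's data structures; the class names are my own:

```java
import java.util.*;

// Toy property-graph model: nodes and relationships both carry arbitrary
// key-value properties, and new properties can be added at any time.
class PGNode {
    final Map<String, Object> props = new HashMap<>();
}

class PGRel {
    final PGNode from;
    final PGNode to;
    final String type; // e.g. "KNOWS"
    final Map<String, Object> props = new HashMap<>();

    PGRel(PGNode from, String type, PGNode to) {
        this.from = from;
        this.type = type;
        this.to = to;
    }
}
```

Notice that learning Jason's birthday later is just another props.put call--no schema migration required.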

We begin to see the power of graph databases very quickly.

We can use graph databases in many ways, including (but definitely not limited to):
  • Recommend products to buy based on a user's purchase history (follow a graph from a product someone has bought back through another user that's bought the same product and then on to another product that other user has also purchased).
  • Find out just how popular someone is (look at the number of relationships that person's node has).
  • See what geographic locations have the most users in it.
  • Find out if you know the CEO at a powerful company through a friend (you can always use more friends!).
This is why sites like Amazon and LinkedIn are so powerful.  Think about how they might use a graph database.
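The first bullet (recommendations via co-purchase) can be sketched in a few lines of plain Java; the in-memory map stands in for the purchase graph, and every name here is made up for illustration:

```java
import java.util.*;

// Sketch of the co-purchase walk described above:
// product -> other buyers of it -> their other products.
class Recommender {
    // purchases: customer name -> set of product names (toy "graph")
    static Set<String> recommend(Map<String, Set<String>> purchases,
                                 String me, String product) {
        Set<String> recs = new TreeSet<>();
        for (Map.Entry<String, Set<String>> e : purchases.entrySet()) {
            // a co-buyer: someone else who also bought this product
            if (!e.getKey().equals(me) && e.getValue().contains(product)) {
                recs.addAll(e.getValue());
            }
        }
        // don't recommend things I already own
        recs.removeAll(purchases.getOrDefault(me, Collections.emptySet()));
        return recs;
    }
}
```

A graph database runs this as a relationship traversal instead of a scan over all customers, but the shape of the walk is the same.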

Ok, that's enough for now.  I keep thinking I can write short posts on this stuff, but, there's just so much!

As always, I'm sure there's plenty more I haven't covered, but if you'd like to see anything else put up here (or clarified), just let me know.  I'm always looking for ways to improve how I organize and present this information.

Peas!


Neo4j: First Blood

That's one serious-sounding title.

This post is about my first foray into the graph database world.  I chose as my first victim/offering Neo4j.  I'll likely end up writing more than a few articles about this particular database, but, I thought I'd start with the basics, including the following:
  • Download
  • Installation
  • Configuration
  • Poking around
(And yes, "poking around" is a sanctioned technical term.)

So, a bit about Neo4j to start!
  • It's been around since 2007.
  • NUMBER ONE SELLING POINT FOR ME: It's ACID compliant!  Not too many NoSQL engines that I've seen (yet) are, for various reasons that are well outside the scope of this post.
  • As you may have guessed, it's primarily meant for integration with Java.
  • It can also be integrated with Spring (a big plus, if you ask me) via Spring Data (POJO development FTW!).
  • Its API is REST-based, and so can be utilized by just about any platform (though you'll likely have to write your own wrapper, unless you can find one out there in the open source world).
  • There are some ready-made wrappers for some platforms available, such as Python and Ruby.
  • It's available for Windows, MacOS and Linux-based OSes.
  • It's available in both 32-bit and 64-bit for Windows and Linux.
  • It's available in 3 versions, including the Community version, the Advanced version, and the Enterprise version.  As you'd expect, the Community version is open source available under GPL (the other versions are covered under AGPL).
  • It comes ready-to-run with a version of the web/app server Jetty.
  • It comes with a built-in web admin console (hence the need for Jetty).
  • Following in the NoSQL tradition, it scales very well for Big Data.
  • Its name lends itself well to any number of The Matrix jokes.
I strongly suggest going to their website (www.neo4j.org) to do a little research of your own.

Download

Given that I'm just looking to get my feet wet with Neo4j, I downloaded the Neo4j v1.6 Community Edition 64-bit Linux package to my ready-made VM (coincidentally named Morpheus) running CentOS (sorry Windows users).  Read: I can't be bothered downloading the source and compiling it.  Note that Java 1.6+ is required; a complete set of requirements can be found here.

The archive is only about 37MB (give or take) and so completed relatively quickly over my bonded DSL connection.

Installation

After un-tarring the package, moving it into an appropriate directory (I'm a sucker for /etc), and starting the Neo4j server from the command line via bin/neo4j start (don't worry; there's a README.txt in the root of the installation directory that has all the quickstart instructions in it), I was ready to rock!

(I should note that I did get a couple of warnings, shown below, but they don't seem to have affected anything just yet, likely given how small my current graph is.

WARNING: Detected a limit of 1024 for maximum open files, while a minimum value of 40000 is recommended.
WARNING: Problems with the operation of the server may occur. Please refer to the Neo4j manual regarding lifting this limitation.
)

Or so I thought.

Configuration

If there's one bone of contention I have with Neo4j, it's that finding the appropriate (and up-to-date) documentation for the config files takes a bit of digging (it's not impossible by any stretch, though).

As I quickly found out, trying to access the web admin console that comes with Neo4j (very handy, I must say) outside of localhost is a non-starter out of the box.

Did I pack it in for the day and go back to flipping through Steam for cheap games?  No!  I did some "research".

Here's the solution: In order to get Neo4j's web admin to work from somewhere outside of localhost, edit neo4j-server.properties in the install directory's conf directory (go figure).

Commented out towards the top of the file is the property org.neo4j.server.webserver.address.  Uncomment it and change it to the IP you want to bind the server to (the property's comments note that there are security concerns to consider, so you may want to consult the Neo4j documentation before doing this).

You can also change other settings in this file, e.g. getting it to work over HTTPS, changing the default ports for each, etc.

(Note: The web admin defaults to running over HTTP on port 7474 and over HTTPS on port 7473.)

So, after making the change to the appropriate IP and restarting the Neo4j server, I tried pointing my browser back at the Neo4j web admin.

Success!


Poking Around

Without going into too much detail (I'll likely do that in subsequent posts), the Neo4j web admin has 5 distinct sections to help manage your installation:
  1. Dashboard
  2. Data browser
  3. Console
  4. Server info
  5. Index manager
Each one is fairly self-explanatory.

The dashboard provides at-a-glance information about your server over a specified timeline, such as the total number of nodes, properties, relationships and relationship types.

The data browser allows you to perform basic CRUD operations via a GUI.  You can also perform look-ups (consult the Help icon immediately to the right of the search button for more details on exactly what you can search for).  In other words, you can create a graph right then and there.

You can also flip the view to a graphical representation of the current graph and manipulate it (via click-and-drag) directly.  This is perhaps the coolest part of the web admin console (hey, we all like cool features!).


That's some serious badassery right there.

Next we have the console.  This is a great way to get familiar with the languages used to query Neo4j, including HTTP (i.e. accessing the REST calls), Gremlin (a Groovy-based querying language becoming common across multiple graph databases; it seems to be mainly for those coming from a math/graph background), and Cypher (Neo4j's own querying language; it seems to be mainly for those coming more from an SQL background).  

A quick note: At the time of this writing, Cypher only allows for read-only queries, whereas Gremlin allows for both reading and writing.

Next up, server info.  This is just a way to view (read: read-only) the server's configuration information.  No biggie.

Finally, we have the index manager.  Now, this is something I'm sure I'll be getting into a lot more as time goes on.  It's worth noting that Neo4j is built using the Lucene project for indexing, so this is very promising (especially for those familiar with Lucene and/or Solr). This makes a great deal of sense given the concept of properties for each node (full text search is going to be very important).

Regardless, you can create and manage indices for both nodes and relationships here.

So there we have it: My first venture into graph databases.  I'll admit I picked Neo4j first based on my initial research into graph databases.  It does seem to be the most popular graph database at the moment, so I look forward to seeing what it can do.

In subsequent posts, I'll be monkeying around with the querying languages, creating and modifying graphs, messing around with indices, and all kinds of other good stuff.

I hope some of you out there found this somewhat useful/informative/cool.  Well, I know I found it cool, but then I always was kind of odd...

Until next time, when we'll go Graph to the Future!  (Sorry, couldn't go an entire post without making at least one movie-based pun.)

Tuesday, 14 February 2012

So, You Want to Be a Grapher?

I've managed to resist the urge to set up a blog, until now.  A good friend and colleague of mine convinced me to do this based on a discussion we had over a beer the other night (and, let's face it: that's always the best place to have such ideas).  So, Martin, thanks for that.  I think.

So, the point of this blog, you wonder anxiously?  What a great question for a segue into an introduction!

I've been working in the IT field as a software engineer for some time and am currently the VP, Technology for a downtown Toronto-based software development firm.  As such, it behooves me (what a great term) to at least try to keep up-to-date with emerging technologies, especially as they mature.

To that end, as of late, I've become fascinated with the whole NoSQL paradigm.  Having spent most of my professional career dealing with RDBMSes, I was curious as to how the whole Big Data notion fit into things.  Browsing through the myriad niches of NoSQL--from document-based to key-value-based--and learning more about the whole movement along the way, I came across one particular type of NoSQL database that really got me glued to the ceiling.

Graph-based databases.

Coming from a background in not just computer science and software engineering, but mathematics as well (I attended the University of Waterloo up here in Ontario, Canada), the graph paradigm spoke volumes to me.

Sure, my math as it pertains to graphs may be a bit rusty, but it's something I clearly remember being rather interested in (should have taken more graph theory courses...).

Even more exciting is the fact that such databases existed.  For those of us who understand (or at least know about) graphs, I think it's safe to say that we can all appreciate the representation of social networks, semantics, and other relationship-driven data as graphs.

What hit me like a tonne of bricks (Lego or otherwise) was that graph-based databases have actually existed for some time.  How long, exactly, I'm not 100% sure yet (as an example, one such database is Neo4j, which has been around since 2007).

The fact that highly-connected data could be so easily and directly represented in technology (along with the above) is what drove me to start digging deeper.  Such data exists in abundance around the web (and elsewhere!).

So began my adventure into the realm of graph-based databases.

"What do you hope to accomplish with this blog?  What are your goals?"  Another astute question; one that I have anticipated to some degree:

  • As much as I hope to inform and educate, this blog is equally meant to serve as a record of my journey into graph-based databases.  I'm hoping that, as I continue to learn, others may glean some knowledge/insight from my posts (however little or much that may be).
  • As stated above (I'll repeat it for the sake of being explicit), I hope to educate and inform people as the world of graph-based databases expands and matures.
  • To explore graph-based databases and their related concepts.  This may include information on other aspects of NoSQL, or even deeper dives into graph theory.
  • To give a base from which to derive a basic understanding of graphs and their databases.
I think that's about sufficient for now.  Should I need to revisit these goals, I will do so when the time comes.

If anyone actually reads these posts, I greatly welcome feedback and comments.  This blog may not be everyone's cup of tea, but I'll take that chance.  Let's try to keep the comments at least somewhat constructive (I'm sure some out there will pick apart things that I may get wrong, but I fully expect and welcome such criticism and corrections).

For the time being, I'll link to any sources from the web that I use.  If I miss any or if people feel that I'm not citing enough, please let me know.  It is not my intention to plagiarize anyone's work, as I know no small effort goes into producing it; rather, it is my intention to aggregate and disseminate knowledge wherever possible.

Ok, I think this post is long-winded enough.  The frequency with which I post remains to be seen.  While I'm sure most won't exactly be waiting with bated breath, then again, maybe some of you will be!

With that, I look forward to posting more soon!  Back to the grindstone!


(And Happy Valentine's Day to all of you out there!)