Tag: XML

  • Convert XML to JSON using XSLT

    With the increasing use of separate services on the same data, the need for portable data formats arose. XML was one of the first widely used formats, but lately JSON has been blooming.

    I don’t have a particular bias here; both serve well in the appropriate environment, although carrying the same data in JSON can result in roughly a 20% size reduction.

    So are they interchangeable? Just recently, I needed to convert XML data into JSON for easier consumption on the client.

    The fastest and (sometimes) easiest way to transform XML into another format is XSLT.
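    Once such a stylesheet is written, applying it from .NET takes only a few lines. A minimal sketch, assuming a hypothetical stylesheet xml-to-json.xslt and an input file data.xml:

        using System.Xml.Xsl;

        class TransformRunner
        {
            static void Main()
            {
                // Load the stylesheet that maps the XML elements to JSON text.
                var xslt = new XslCompiledTransform();
                xslt.Load("xml-to-json.xslt");

                // Transform the input document and write the JSON result out.
                xslt.Transform("data.xml", "data.json");
            }
        }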

    (more…)
  • LINQ to XML

    It seems that LINQ to XML does not get nearly as much attention as LINQ to SQL, which is a shame since there is a lot going on here too. The big improvements are in XML document navigation, working with namespaces, and document construction.
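    A small illustration of those three points, with element names made up for the example: the namespace is a first-class object, the document is built in a single expression, and navigation is just a LINQ query.

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class LinqToXmlDemo
        {
            static void Main()
            {
                // Namespaces are real objects instead of magic string prefixes.
                XNamespace ns = "http://example.com/orders";

                // Functional construction: the whole document in one expression.
                var doc = new XDocument(
                    new XElement(ns + "orders",
                        new XElement(ns + "order",
                            new XAttribute("id", 1),
                            new XElement(ns + "total", 120.50m)),
                        new XElement(ns + "order",
                            new XAttribute("id", 2),
                            new XElement(ns + "total", 75.00m))));

                // Navigation: query the element axes like any other LINQ source.
                var totals = doc.Descendants(ns + "total").Select(t => (decimal)t);

                Console.WriteLine(totals.Sum()); // 195.50
            }
        }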

    (more…)
  • Matchpoint

    During 2007 and 2008 I worked on a document analysis tool called “Matchpoint”.

    The idea is to parse a document by identifying content blocks and then find certain keywords within their context. The document is then tagged based on the information found.

    I did the software architecture first, defining the concepts, entities, and relations, and identifying the crucial parts of the system.

    The heart of the system is the parsing engine that identifies segments of a document, for example education, experience, and so on. All permutations of the segments are tried, and the one that matches the most segments is selected for further analysis. Each recognized segment is then searched for keywords. Each keyword has appropriate tags assigned, and this is how the document ends up tagged.

    Since the algorithm has to analyze documents in different languages, using semantic algorithms seemed a bit too complicated, so I went with regular expressions.
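    As a rough illustration of that choice, tagging a recognized segment boils down to running a set of keyword patterns over its text. A minimal sketch, with made-up keywords and tags rather than the actual Matchpoint rules:

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class KeywordTagger
        {
            // Hypothetical keyword patterns mapped to the tags they imply.
            static readonly Dictionary<string, string[]> Keywords = new Dictionary<string, string[]>
            {
                { @"\buniversity\b",      new[] { "education" } },
                { @"\bdeveloper\b",       new[] { "experience" } },
                { @"\bproject manager\b", new[] { "management" } },
            };

            static IEnumerable<string> Tag(string segmentText)
            {
                var tags = new HashSet<string>();
                foreach (var pair in Keywords)
                    if (Regex.IsMatch(segmentText, pair.Key, RegexOptions.IgnoreCase))
                        foreach (var tag in pair.Value)
                            tags.Add(tag);
                return tags;
            }

            static void Main()
            {
                var text = "Graduated from the University of Belgrade, then worked as a software developer.";
                foreach (var tag in Tag(text))
                    Console.WriteLine(tag); // education, experience
            }
        }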

    Documents can be emailed or uploaded via FTP to the web server, where a Windows service monitors a configured folder. A .NET console application is then run to convert each document to plain text using IFilters, run the analysis, and finally upload the data to a Microsoft SQL Server database.
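    The folder monitoring part of that pipeline can be sketched with a FileSystemWatcher; the drop folder path is a placeholder and the processing step is only hinted at in a comment:

        using System;
        using System.IO;

        class DropFolderMonitor
        {
            static void Main()
            {
                // Hypothetical folder where emailed or FTP-uploaded documents land.
                var watcher = new FileSystemWatcher(@"C:\Matchpoint\Incoming");

                watcher.Created += (sender, e) =>
                {
                    Console.WriteLine("New document: " + e.FullPath);
                    // The real service would launch the console application here to
                    // extract plain text, run the analysis, and store the results.
                };

                watcher.EnableRaisingEvents = true;
                Console.ReadLine(); // keep the process alive while watching
            }
        }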

    Users can use a web application built on ASP.NET Web Forms to search and view indexed documents.

  • Adverto Mystery Shopping

    In the winter of 2008, I collaborated with my old friend Goran Petrović to create a website for his company “Adverto Mystery Shopping”.

    We brainstormed the information architecture, and afterward I created a rudimentary CMS to maintain the content.

    As he was already using an ASP.NET 1.1 application for his business, the same platform was a requirement for the website as well. The markup is as XHTML-compatible as ASP.NET 1.1 Web Forms allows, styled using CSS, with interaction enhanced using JavaScript and MooTools.

    You can have a look in more detail at www.adverto-ms.rs.

  • Car trading B2B

    During 2008 I worked on a vehicle auctioning platform for a Swiss company.

    Frequent discussions with my close associate Saša Klačar, who handled requirements gathering and communication with the client, resulted in a well-documented software architecture.

    After defining the system entities and relations, I managed a team of experienced developers that went on to implement the system using ASP.NET Web Forms for the front end, .NET, C#, and SubSonic for the middle tier, and Microsoft SQL Server as the back end.

    I also designed the database, scripted SubSonic, implemented part of the DAL, and created a Windows service for XML data imports from an external source.

    Following a loose Agile process, we first released a user interface mock-up, which triggered several system redesigns and resulted in a better specification in the following releases.

    The better part of the project was done when the client stopped it; they were very satisfied with our results, but the halt was caused by the company’s internal problems.

  • GTECH (formerly Finsoft)

    Finsoft is an international company producing large-scale software systems. Its main products deal with the specific demands of sports betting, real-time transactions, and information management.

    During my engagement, the company was acquired by the GTECH Corporation.

    I worked as a senior analyst programmer on web-oriented products – sports betting, affiliate programs, real-time data usage, payment services integration.

    For more details about the company, have a look at Finsoft and GTECH portfolios.

  • Quiz game

    During the autumn of 2004 I worked in a team to create a quiz game for the Zepter company. The idea was that a game presenter leads the game, and players must answer a set of questions, each within a time limit.

    Vladimir Stefanović was in charge of the design, and Neven Jovanović made the back-end application.

    My part of the job was to design system entities and relations, and develop the ActionScript application.

    After an introductory screen, a participant chooses a set of questions; previously opened sets are disabled. The game presenter then starts with one participant, who must choose an answer within the time limit. Each question has several predefined answers and a time limit, and an image can additionally be shown for any question.

    All game attributes are configurable via XML – questions and answers, images, time spans, sets of questions, etc.

  • Currency Calculator

    In close collaboration with Teletrader’s web designer Vladimir Stefanović, we created the “Currency Calculator” at the beginning of 2003.

    With this useful tool you can convert an amount from one currency to another. It works with the latest currency rates retrieved from a web service, so you need an active internet connection when first starting it, and whenever you want to refresh the rates.
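    The conversion itself is simple once the rates are known. A minimal sketch with hard-coded example rates standing in for the values fetched from the web service:

        using System;
        using System.Collections.Generic;

        class CurrencyConversion
        {
            // Example rates against a common base currency (EUR); the real tool
            // refreshes these from the web service instead.
            static readonly Dictionary<string, decimal> RatesToEur = new Dictionary<string, decimal>
            {
                { "EUR", 1.00m },
                { "USD", 0.90m },
                { "RSD", 0.0085m },
            };

            static decimal Convert(decimal amount, string from, string to)
            {
                // Go through the base currency: source -> EUR -> target.
                return amount * RatesToEur[from] / RatesToEur[to];
            }

            static void Main()
            {
                Console.WriteLine(Convert(100m, "USD", "RSD")); // about 10588 RSD
            }
        }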

  • Spider

    During 2003, I was asked to create an interesting tool: a spider to walk web pages and extract data.

    I analyzed how to achieve this and implemented it fully. The focus was on parsing HTML, collecting HTTP request parameters and values, and extracting the data.

    The tool can use a page with links, as well as a form, as a source for creating possible request parameter and value combinations. The links within each page are then located and parsed, and appended to the list of pages to be processed.
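    The core crawl loop can be sketched in a few lines: fetch a page, pull the href values out with a regular expression, and queue them for processing. The starting URL is a placeholder, and the original tool did considerably more thorough HTML parsing than this:

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Text.RegularExpressions;
        using System.Threading.Tasks;

        class Spider
        {
            static async Task Main()
            {
                var client = new HttpClient();
                var queue = new Queue<string>();
                var visited = new HashSet<string>();

                queue.Enqueue("http://example.com/"); // placeholder starting page

                while (queue.Count > 0 && visited.Count < 50)
                {
                    var url = queue.Dequeue();
                    if (!visited.Add(url))
                        continue;

                    var html = await client.GetStringAsync(url);

                    // Locate the links within the page and append them to the
                    // list of pages to be processed.
                    foreach (Match m in Regex.Matches(html, "href=\"(http[^\"]+)\"", RegexOptions.IgnoreCase))
                        queue.Enqueue(m.Groups[1].Value);

                    Console.WriteLine(url + ": queue now holds " + queue.Count + " page(s)");
                }
            }
        }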

    Configurations and results are saved as XML files. I also made a viewer for the results, which can be sorted on any attribute.