Our Blog

    Ongoing observations by End Point Dev people

    Testing to defend against nginx add_header surprises

    By Jon Jensen
    May 29, 2020

    [Image: Cute calico cat perched securely upon a trepidatious shoe]

    These days when hosting websites it is common to configure the web server to send several HTTP response headers with every response, for security purposes.

    For example, using the nginx web server we may add these directives to our http configuration scope to apply to everything served, or to specific server configuration scopes to apply only to particular websites we serve:

    add_header Strict-Transport-Security max-age=2592000 always;
    add_header X-Content-Type-Options    nosniff         always;
    

    (See HTTP Strict Transport Security and X-Content-Type-Options at MDN for details about these two particular headers.)

    The surprise (problem)

    Once upon a time I ran into a case where nginx usually added the expected HTTP response headers, but later appeared to be inconsistent and sometimes did not. This is distressing!

    Troubleshooting led to the (re-)discovery that add_header directives are not always additive throughout the configuration, as one would expect and as every other server I can think of behaves.

    If you define your add_header directives in the http block and then use an add_header directive in a server block, those from the http block will disappear.
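
    To make the gotcha concrete, here is a minimal sketch (the server block and its X-Frame-Options header are hypothetical, added only to illustrate the behavior):

    http {
        # Intended to apply to everything served:
        add_header Strict-Transport-Security max-age=2592000 always;
        add_header X-Content-Type-Options    nosniff         always;

        server {
            server_name www.example.com;

            # Because this server block defines its own add_header, it
            # inherits none of the add_header directives from the http
            # block above. Responses here get only X-Frame-Options!
            add_header X-Frame-Options SAMEORIGIN always;
        }
    }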

    If you define …


    sysadmin nginx security javascript nodejs testing

    Implementing SummAE neural text summarization with a denoising auto-encoder

    By Kamil Ciemniewski
    May 28, 2020

    [Image: Book open on lawn with dandelions]

    If there’s any problem space in machine learning with no shortage of (unlabelled) data to train on, it’s natural language processing (NLP).

    In this article, I’d like to take on the challenge of taking a paper that came from Google Research in late 2019 and implementing it. It’s going to be a fun trip into the world of neural text summarization. We’re going to go through the basics, the coding, and then we’ll look at what the results actually are in the end.

    The paper we’re going to implement here is: Peter J. Liu, Yu-An Chung, Jie Ren (2019) SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic Auto-Encoders.

    Here’s the paper’s abstract:

    We propose an end-to-end neural model for zero-shot abstractive text summarization of paragraphs, and introduce a benchmark task, ROCSumm, based on ROCStories, a subset for which we collected human summaries. In this task, five-sentence stories (paragraphs) are summarized with one sentence, using human summaries only for evaluation. We show results for extractive and human baselines to demonstrate a large abstractive gap in performance. Our model, SummAE, consists of a denoising auto-encoder that embeds sentences and …


    python machine-learning artificial-intelligence natural-language-processing

    Designing flexible CI pipelines with Jenkins and Docker

    By Will Plaut
    May 25, 2020

    [Image: Pipes. Photo by Tian Kuan on Unsplash]

    When deciding on how to implement continuous integration (CI) for a new project, you are presented with lots of choices. Whatever you end up choosing, your CI needs to work for you and your team. Keeping the CI process and its mechanisms clear and concise helps everyone working on the project. The setup we are currently employing, and what I am going to showcase here, has proven to be flexible and powerful. Specifically, I’m going to highlight some of the things Jenkins and Docker do that are really helpful.

    Jenkins

    Jenkins provides us with all the CI functionality we need, and it can be easily configured to connect to projects on GitHub and our internal GitLab. Jenkins has support for something it calls a multibranch pipeline: a Jenkins project follows a repo and builds any branch that has a Jenkinsfile. A Jenkinsfile configures an individual pipeline that Jenkins runs against a repo on a branch, tag, or merge request (MR).

    To keep it even simpler, we condense the steps that a Jenkinsfile runs into shell scripts that live in /scripts/ at the root of the source repo, to do things like test, build, or deploy (for example, /scripts/test.sh); a minimal sketch of such a Jenkinsfile follows below. If a team member …
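
    A minimal sketch of such a Jenkinsfile, assuming a simple test-and-build flow (the stage names and /scripts/build.sh are illustrative; only /scripts/test.sh comes from the text):

    pipeline {
        agent any
        stages {
            stage('Test') {
                steps {
                    // Delegate the real work to the repo's own script
                    sh './scripts/test.sh'
                }
            }
            stage('Build') {
                steps {
                    sh './scripts/build.sh' // hypothetical companion script
                }
            }
        }
    }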


    jenkins docker containers groovy

    Creating a Messaging App Using Spring for Apache Kafka, Part 3

    By Kürşat Kutlu Aydemir
    May 21, 2020

    [Image: Spring-Kafka. Photo by Pascal Debrunner on Unsplash]

    This article is part of a series. The GitHub repository with code examples can be found here.

    In this article we’ll create the persistence and cache models and repositories. We’re also going to create our PostgreSQL database and the basic schema that we’re going to map to the persistence model.

    Persistence

    Database

    We are going to keep the persistence model as simple as possible so we can focus on the overall functionality. Let’s first create our PostgreSQL database and schema. Here is the list of tables that we’re going to create:

    • users: will hold the users who are registered to use this messaging service.
    • access_token: will hold the unique authentication tokens per session. We’re not going to implement an authentication and authorization server specifically in this series but rather will generate a simple token and store it in this table.
    • contacts: will hold relationships between existing users.
    • messages: will hold messages sent to users.

    Let’s create our tables:

    CREATE TABLE kafkamessaging.users (
        user_id BIGSERIAL PRIMARY KEY,
        fname VARCHAR(32) NOT NULL,
        lname VARCHAR(32) NOT NULL,
        mobile VARCHAR(32) NOT NULL,
        created_at …
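
    As a hedged sketch of the persistence model such a table might map to, here is a Spring Data JPA entity and repository for users (class and method names are my assumptions, not necessarily the series’ actual code):

    import javax.persistence.*;
    import org.springframework.data.jpa.repository.JpaRepository;

    @Entity
    @Table(name = "users", schema = "kafkamessaging")
    public class User {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY) // maps BIGSERIAL
        @Column(name = "user_id")
        private Long userId;

        @Column(name = "fname", nullable = false, length = 32)
        private String fname;

        @Column(name = "lname", nullable = false, length = 32)
        private String lname;

        @Column(name = "mobile", nullable = false, length = 32)
        private String mobile;

        // getters and setters omitted for brevity
    }

    // A repository interface gets basic CRUD operations for free:
    public interface UserRepository extends JpaRepository<User, Long> {
        User findByMobile(String mobile); // derived query, assumed lookup
    }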

    java spring frameworks kafka spring-kafka-series

    Shopify Admin API: Importing Products in Bulk

    By Patrick Lewis
    May 4, 2020

    [Image: Cash register. Photo by Chris Young, used under CC BY-SA 2.0, cropped from original.]

    I recently worked on an interesting project for a store owner who was facing a daunting task: he had an inventory of hundreds of thousands of Magic: The Gathering (MTG) cards that he wanted to sell online through his Shopify store. The logistics of tracking down artwork and current market pricing for each card made it impossible to do manually.

    My solution was to create a custom Rails application that retrieves inventory data from a combination of APIs and then automatically creates products for each card in Shopify. The resulting project turned what would have been a months- or years-long task into a bulk upload that only took a few hours to complete and allowed the store owner to immediately start selling his inventory online. The online store launch turned out to be even more important than initially expected due to current closures of physical stores.

    Application Requirements

    The main requirements for the Rails application were:

    • Retrieving product data for MTG cards by merging results from a combination of sources/APIs
    • Mapping card attributes and metadata into the format expected by the Shopify Admin API …

    saas ecommerce ruby rails

    Creating a Messaging App Using Spring for Apache Kafka, Part 2

    Kürşat Kutlu Aydemir

    By Kürşat Kutlu Aydemir
    April 29, 2020

    [Image: Spring pasture]

    This article is part of a series. The GitHub repository with code examples can be found here.

    In this part I’ll walk through Kafka’s servers and processes, the basics of spring-kafka producers and consumers, persistence, and caching configurations.

    Kafka Servers

    Kafka uses Apache ZooKeeper as the distributed coordination server. You can download the Apache Kafka with ZooKeeper bundle here.

    When you download and untar the Kafka bundle, Kafka’s console scripts can be found in the bin directory. To enable Kafka connectivity and prepare the Kafka configuration, let’s start the Kafka servers and see how to create Kafka topics and test console producers and consumers.

    ZooKeeper

    To start ZooKeeper with the default properties, run the following command:

    bin/zookeeper-server-start.sh config/zookeeper.properties
    

    Kafka Server

    A single Kafka server with the default properties can be started with the following command:

    bin/kafka-server-start.sh config/server.properties
    

    Kafka Topics

    Creating Kafka Topics

    Let’s create a test Kafka topic:

    bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic myTestTopic
    

    List Topics

    To list all previously created Kafka …
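
    As a taste of the spring-kafka producer and consumer basics mentioned at the start, here is a minimal sketch against the myTestTopic topic created above (the bean names and group id are my assumptions):

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class MyTestProducer {
        private final KafkaTemplate<String, String> kafkaTemplate;

        public MyTestProducer(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        public void send(String message) {
            // Publish asynchronously to the topic created above
            kafkaTemplate.send("myTestTopic", message);
        }
    }

    @Component
    class MyTestConsumer {
        @KafkaListener(topics = "myTestTopic", groupId = "myTestGroup")
        public void listen(String message) {
            System.out.println("Received: " + message);
        }
    }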


    java spring frameworks kafka spring-kafka-series

    Convenient Reporting with Jasper

    By Árpád Lajos
    April 28, 2020

    [Image: Basalt pillars]

    Business Intelligence (BI) reporting is a huge problem space in custom software. There’s a wide range of business needs for looking at past behavior and predicting future behavior. Building a reporting tool can be a very cost-effective way to get this data, especially compared to writing individual queries or manually generating reports.

    I’ve been working with Jasper in the Java project space and wanted to write about some research I’ve collected on the topic.

    JasperReports compiles .jrxml template files into .jasper reports, which are then filled with data and exported; a rough sketch of this flow follows the list below. Possible output targets include:

    • Screen
    • Printer
    • PDF
    • HTML
    • Excel files
    • RTF
    • ODT
    • CSV
    • XML
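
    Here is that rough sketch, using the stock JasperReports API (the file names and parameter are placeholders):

    import java.util.HashMap;
    import java.util.Map;
    import net.sf.jasperreports.engine.*;

    public class ReportDemo {
        public static void main(String[] args) throws Exception {
            // Compile the .jrxml template into a .jasper report
            JasperReport report = JasperCompileManager.compileReport("invoice.jrxml");

            // Fill the compiled report with parameters and a data source
            Map<String, Object> params = new HashMap<>();
            params.put("TITLE", "Monthly invoices"); // placeholder parameter
            JasperPrint print = JasperFillManager.fillReport(
                report, params, new JREmptyDataSource());

            // Export the filled report to one of the targets above, e.g. PDF
            JasperExportManager.exportReportToPdfFile(print, "invoice.pdf");
        }
    }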

    Jasper history

    • June 2001: Teodor Danciu began working on JasperReports.
    • September 2001: Jasper was registered on SourceForge.
    • November 2001: JasperReports 0.1.5 was released.
    • 2004: Panscopic teamed up with Teodor Danciu, acquired ownership of the product, and changed its own name to Jaspersoft.
    • 2005: JasperReports 1.0 was released.
    • 2007: Brian Gentile became CEO of the company.
    • 2014: TIBCO acquired Jaspersoft for ~$185 million.

    Best reporting tools

    Let’s compare some popular reporting tools:

    • JasperReports is a free and open source Java-based reporting tool, which supports lots of …

    java reporting pdf

    Migrating large PostgreSQL databases

    Árpád Lajos

    By Árpád Lajos
    April 21, 2020

    [Image: Migration. Photo by Harshil Gudka on Unsplash]

    The challenge

    One of our clients has a large and important health-related application. It’s built on an end-of-life Ruby on Rails-based open source framework, heavily customized over the years. They wanted to upgrade to a newer, supported, Java-based open source application a partner organization had developed as a replacement. Both organizations used the old system previously. To do that we would need to migrate all their existing PostgreSQL data from the old system to the new one, retaining important customizations while adapting to the new database schema.

    Although there were many similarities between the old system and the new, the differences were significant enough to require careful study of the database schemas and the migration scripts designed to move the data:

    • There were schema-level differences between our old database and the partner organization’s old database.
    • Even where the two old databases were similar there were differences on the data level, such as different standards, different values in table records, different representation, etc.
    • We had different content, so if a script was working well for their data, it was not …

    postgres big-data database