Retrieving an uploaded PDF file from an iMIS database

The developers at ASI, which produces the engagement management system iMIS, made an interesting choice for file management. Images uploaded through RiSE, iMIS’s content management system, are stored in a server operating system folder and can be accessed directly with a URL like https://www.example.org/images/sample.jpg. Document files, on the other hand, are stored as content records in the iMIS database, and can only be retrieved using special JavaScript links on a webpage that’s part of the iMIS website. What’s more, such links work only if they exist in the HTML that’s present on the initial page load; if you try inserting links into the HTML using JavaScript, they do nothing.

I still haven’t worked out exactly what sort of black magic happens behind the scenes to retrieve those files from the database when a user clicks on one of the special links, and generally speaking, it’s probably best to use the built-in mechanism for retrieving files. But what if you absolutely need to grab a specific file from the database on demand without using one of the preexisting links?

I’ve determined that it is possible to retrieve and deliver a PDF file stored in the database using the iMIS API. I’ll explain how you can do the same using a custom IQA query, an iPart containing the pdf-lib JavaScript library and the dandavis JavaScript download script, and a bit of JavaScript and jQuery.

About my environment

I should begin by mentioning that I’m developing using a self-hosted instance of iMIS 20.2.65.9955.

I have not tested these methods using an ASI-hosted iMIS installation or with any other version of iMIS.

Creating your IQA query

We’ll begin by using RiSE’s Intelligent Query Architect section to create our custom query. For the purpose of this tutorial, I’m using a folder named KB, and I’m giving my query the name DownloadPDF.

When you create your new IQA query, on the Sources tab, begin by adding Document and Hierarchy business objects listed in $/Common/Business Objects, then add an additional copy of each of those business objects. Join your sources on Document.Document Version Key = Hierarchy.UniformKey, Hierarchy.ParentHierarchyKey = Hierarchy1.HierarchyKey, and Hierarchy1.UniformKey = Document1.Document Version Key.

Screenshot of IQA Sources tab

On the Filters tab, specify that Document.Document Status Code must equal 40, Document.Document Name must equal “@url:file”, and Document1.Document Name must equal “@url:folder”.

Screenshot of IQA Filters tab

On the Display tab, select Document.Document Name and give it an Alias of File, and select Document1.Document Name and give it an Alias of Folder. Add a custom SQL Expression of CAST(vBoDocument.Blob as VARBINARY(max)) and give it an Alias of FileContents.

Screenshot of IQA Display tab

Finally, be sure to save your query.

Creating your iPart

I’m using pdf-lib to prepare the contents of PDFs stored in the iMIS database for end users. Even the minified version of pdf-lib weighs in at around half a megabyte, which is too large to stuff into a Content Html iPart in a RiSE webpage record. You can work around that limitation by creating a client-based iPart containing the pdf-lib JavaScript file.

In addition to pdf-lib, I’m using the dandavis download script to handle delivery of PDF files to users’ browsers.

For the purposes of this tutorial, I’m naming my iPart KBpdflib.

Download both scripts and place pdf-lib.min.js and download.min.js in a folder on your computer. In the same folder, save a third file named index.html with the following contents, replacing KBpdflib with whatever name you’re using for your iPart:

<script src="/Areas/KBpdflib/pdf-lib.min.js"></script>
<script src="/Areas/KBpdflib/download.min.js"></script>

Place both JavaScript files and your newly-created index.html in a ZIP file named KBpdflib.

Uploading your iPart

Navigate to RiSE > Document system, then open the ContentItems directory. Go to New > Zip file and select the ZIP file you created.

Next, navigate to RiSE > Maintenance > Content Types. If desired, create a subfolder by going to New > Folder, then go to New > Content Type. Give your iPart a name (e.g., KBpdflib) and, if desired, a description; set both URL fields to ~/Areas/KBpdflib/index.html, where KBpdflib equals the name of the ZIP file you uploaded; and then save your Content Type record.

Finally, navigate to RiSE > Maintenance > Deploy Content Items and click the Deploy Content Items button. Assuming everything processes normally, your iPart should now be uploaded.

Identifying a PDF to download

Navigate to RiSE > Page Builder > Manage files. If you have not previously uploaded any PDF files, you’ll need to upload one now; otherwise, make a note of the names of an existing file and the folder in which it exists.

For the purposes of this tutorial, I’m using a file named KBTest.pdf located in the folder named KB.

Creating a webpage to download the PDF file

The heavy lifting is finished at this point; all that’s left to do is create a webpage that makes use of your IQA query and the iPart you created. To do that, navigate to RiSE > Page Builder > Manage content; after selecting the folder where you want to store your page, go to New > Website Content.

Give your page a Title and Publish file name, then click Add Content. Select the iPart you uploaded earlier and click OK to insert it into the new page.

Next, click Add Content again and insert a Content Html iPart. Configure that iPart to contain the following HTML code:

<h1>DownloadPDF</h1>
<div id="json-results">
    <label for="kb-folder-name">Folder</label>
    <input id="kb-folder-name" name="kb-folder-name" type="text">
    <label for="kb-file-name">File</label>
    <input id="kb-file-name" name="kb-file-name" type="text">
    <button id="kb-submit" name="kb-submit" value="Submit">Submit</button>
</div>

After that, click Add Content one more time and insert a second Content Html iPart. Configure that iPart to contain the following JavaScript code:

<script type="text/javascript">

    const noResults = "PDF not found.";
    const ajaxError = "The PDF failed to load. Please try again.";
    
    
    document.getElementById("kb-submit").addEventListener("click", function(event) {
        event.preventDefault();
        downloadFile(document.getElementById("kb-folder-name").value, document.getElementById("kb-file-name").value);
    });
    
    
    // retrieve JSON for specified PDF
    function downloadFile(folder, file) {
    
        // maximum number of results to be returned
        const maxResults = 10;
        
        // set URL for API call to retrieve PDF
        let apiURL = "/api/IQA?QueryName=$/KB/DownloadPDF&folder=" + encodeURIComponent(folder) + "&file=" + encodeURIComponent(file) + "&Limit=" + maxResults;
        
        // make ajax call to API
        jQuery.ajax(apiURL, {
            type: "GET",
            contentType: "application/json",
            headers: {
            
                // we pass __RequestVerificationToken value from webpage so API will return results
                RequestVerificationToken: document.getElementById("__RequestVerificationToken").value
            },
            success: function(data) {
            
                // display results if any were found
                if (data["TotalCount"] > 0) {
                
                    let fileName = "";
                    let folderName = "";
                    let fileContents = "";
                    
                    // loop through values in JSON string
                    for (let i = 0; i < data["Items"]["$values"].length; i++) {
                    
                        // get properties for specific record, then loop through them
                        let record = data["Items"]["$values"][i]["Properties"]["$values"];
                        for (let j = 0; j < record.length; j++) {
                            
                            if (record[j].Name == "File") {
                                fileName = record[j].Value;
                            } else if (record[j].Name == "Folder") {
                                folderName = record[j].Value;
                            } else if (record[j].Name == "FileContents") {
                                fileContents = record[j].Value["$value"];
                            }
                        }
                    }
                    
                    // call script to generate PDF
                    generatePDF(fileName, fileContents);
                } else {
                    alert(noResults);
                }
            },
            error: function() {
                alert(ajaxError);
            }
        });
    }
</script>
<script>
    const { degrees, PDFDocument, rgb, StandardFonts } = PDFLib;

    async function generatePDF(fileName, fileContents) {
    
        // load file contents retrieved from API
        const templatePdfBytes = fileContents;
        const templateDoc = await PDFDocument.load(templatePdfBytes);
        
        // serialize PDF document to bytes (a Uint8Array)
        const pdfBytes = await templateDoc.save();
        
        // trigger browser to download the PDF document
        download(pdfBytes, fileName, "application/pdf");
    }
</script>

Click the Save and Publish button to save your new page, then access the page using your browser. Enter your folder and file name in the appropriate input fields and click the Submit button, and the browser should indicate it is downloading the specified PDF.

So, what exactly is going on here? After passing a folder name and file name as part of our API call in the downloadFile function, we’re taking the binary data for the PDF that the API returns and passing it into the generatePDF function, then using PDFDocument.load to provide that data to pdf-lib. pdf-lib then turns it into a downloadable PDF. Pretty neat!
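Incidentally, the FileContents value arrives from the API as a base64-encoded string (JSON has no binary type), which PDFDocument.load happens to accept directly. If you ever need the raw bytes yourself, say, to hand them to a different library, a minimal decoder would look something like this; base64ToUint8Array is my own helper name, not part of the iMIS API or pdf-lib:

```javascript
// Decode a base64 string (as returned for a VARBINARY column) into a Uint8Array.
// base64ToUint8Array is a hypothetical helper, not part of the iMIS API or pdf-lib.
function base64ToUint8Array(base64) {
    var binary = atob(base64);                // decode base64 into a binary string
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);      // copy each byte value
    }
    return bytes;
}

// "JVBERi0=" is the base64 encoding of "%PDF-", the magic bytes that begin every PDF
var bytes = base64ToUint8Array("JVBERi0=");
```

pdf-lib makes the conversion unnecessary here, but it’s handy to have when debugging what the API actually returned.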

Caveats

This approach does make it possible to deliver PDF files for which links did not exist on a RiSE webpage at the time the page was initially loaded, but there are a few caveats:

  1. The IQA we created assumes that you have no duplicated folder name/file name combinations. If you have multiple folders with the same name in RiSE, and each of those folders contains files with the same names, the IQA will return data for all matching folder/file combinations. If, on the other hand, you have no folder name duplication, then there’s no problem.
  2. Retrieving a PDF via the API is slower than using iMIS’s built-in JavaScript links. For relatively small files, the difference may not be significant, but in my testing, multi-megabyte PDFs take significantly longer to download when retrieving the data via the API. With the JavaScript links, the user’s browser will at least display an indication of progress as the file is downloaded; using the API, there’s no indication of progress until the data has completely downloaded and is ready to go.
  3. An end user could access any published PDF file that exists in RiSE if he or she knows or can guess the folder and file name and has sufficient permissions to access the folder and file. Before implementing the approach outlined here in a production scenario, you’ll want to ensure any PDF content records that should not be accessible to all users have their permissions set appropriately in RiSE.

In spite of those potential issues, this approach could still be useful. For example, you could create another IQA that retrieves the names of all PDF files stored in a particular RiSE folder, use that query to dynamically generate a list of links on your webpage, and have each link kick off downloading a PDF via the API.
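If you do build such a list, the fiddliest part is unwrapping the JSON shape the IQA endpoint returns. A small helper makes that reusable. This is only a sketch, and parseIqaItems is my own name, not part of the iMIS API:

```javascript
// Flatten the IQA JSON shape (Items.$values[].Properties.$values[])
// into plain objects keyed by each column's alias.
// parseIqaItems is a hypothetical helper, not part of the iMIS API.
function parseIqaItems(data) {
    return data["Items"]["$values"].map(function (item) {
        var row = {};
        item["Properties"]["$values"].forEach(function (prop) {
            row[prop.Name] = prop.Value;
        });
        return row;
    });
}
```

With that in place, a listing query’s results reduce to plain objects like { Folder: "KB", File: "KBTest.pdf" }, and each row can become a link whose click handler calls downloadFile(row.Folder, row.File). Note that binary columns such as FileContents nest their data one level deeper, under Value["$value"], so they need special-casing.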

Identifying the cause of broken sorting in iMIS Query Menu iPart

I’m back with my latest installment in Weird Things I’ve Found in iMIS version 20.2.65.9955. In this episode, we take a look at the Query Menu iPart and broken sorting.

I created a page with multiple instances of the Query Menu iPart, each of which loaded the results of a different IQA query. Everything looked pretty.

This made me happy.

Then I received a report that the sorting was broken: you could sort the query results in any of the Query Menu iParts one time, but nothing appeared to happen after you clicked any other column heading until you reloaded the page.

This made me sad.

Some investigation with Google Chrome’s Developer Tools revealed not a JavaScript problem, but that a 500 Internal Server Error was occurring on the second and subsequent clicks. Server event logs revealed an “invalid viewstate” error originating in Asi.Modules.ViewStateExceptionModule.

Next, I created a test page within RiSE, added the Query Menu iPart to it, and selected an IQA query. I published the page, viewed the page…and was able to sort the results as many times as I liked. After that, I tried removing all but one Query Menu iPart instance from my original page, published that page…and saw the same error as before.

After much headscratching and puzzled grunting, I finally realized the difference. I had accessed my test page, on which the Query Menu functioned as expected, at its actual location on the server; e.g., http://www.example.org/ABC/Testbed/Test.aspx. In contrast, I had accessed my page where the Query Menus were not functioning as expected using the full URL as specified under sitemaps; e.g., http://www.example.org/ABC/Portals/PortalA/ABC/Portals/PortalA/Default.aspx.

After that “aha” moment, I tried stripping the sitemap portion from my original page’s URL, accessing it directly instead (http://www.example.org/ABC/Portals/PortalA/Default.aspx). Once I did that, I was able to perform multiple sorts of the data returned in the various Query Menu instances with no further server errors.

My best guess at this point is that something about the Telerik code used to build the Query Menu iPart doesn’t play nicely with whatever black magic is happening behind the scenes to make the full URL work. Whatever the case, I’m happy to know I can use the workaround of accessing the page directly instead of including navigational structure in the URL.

iMIS displays generic error when user attempts to download uploaded file

While uploading and linking to PDF files in RiSE with iMIS version 20.2.65.9955, I encountered an interesting bug, but I also identified a workaround. Today, I’ll share both the bug and the workaround here.

The particular page with which I was working uses the Content Collection Organizer iPart to display content from other content records within tabs. I observed that if I create a link to a PDF that has been uploaded in RiSE in the content record for one of the tab content areas, or subpages if you like, then publish the record, the website displays a generic error when I click the link to download the PDF:

An unexpected iMIS error has occurred. Please try your operation again; if you still receive an error, contact the system administrator.

That’s not very helpful, so I took a look at Event Viewer on the server and noted an HttpException with the following message:

Exception message: Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request.

Interesting. Something’s happening in the iMIS/RiSE back-end code, then, which I can’t modify.

I did identify a workaround, however. If I create a download link in the main content record and publish that record, the links in the tabbed areas then function normally! Creating a standard link (e.g., with an href value of “#” or “/”) does not make this work correctly; the link must be in the format that RiSE uses when you link to a PDF that was uploaded to RiSE—i.e., with an href value like “javascript://[*]”.

The link apparently does not have to contain any text, however; it simply must exist. The presence of the following in the main page’s content record is sufficient:

<a href="javascript://[]"></a>

The link is not visible to the user because there’s no text, but it is the “magic sauce” that makes the PDF links within the tab content function as expected.

Using PHP and curl to post JSON data to the iMIS API

As a programming challenge, I recently decided to tackle using PHP and curl to connect to the iMIS API from outside the confines of RiSE. It’s relatively simple to get data from the iMIS API when you’re already logged in to an iMIS website, but I wanted to figure out how to post data to the API from an entirely different server. Documentation refers to this as direct access.

For my experiment, I created a PHP file on an external server. From a webpage within an instance of iMIS, I posted JSON data to my PHP file, which in turn retrieved an authorization token from iMIS and then used that token to submit the data to the API.
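The webpage side of that hand-off is simple enough to sketch in JavaScript. I’ve wrapped the details in a helper so they’re testable; buildRelayRequest, the relay.php filename, and the /Party API path are all placeholders of my own, not anything iMIS-specific:

```javascript
// Build the URL and fetch options for posting JSON to the external PHP relay.
// buildRelayRequest, relay.php, and /Party are hypothetical names for this sketch.
function buildRelayRequest(relayBase, apiPath, payload) {
    return {
        url: relayBase + "?url=" + encodeURIComponent(apiPath),  // PHP reads this via $_REQUEST["url"]
        options: {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(payload)                        // PHP reads this via php://input
        }
    };
}

var req = buildRelayRequest("https://external.example.org/relay.php", "/Party", { test: true });
// fetch(req.url, req.options);   // uncomment to actually send the request
```

The PHP file then reads the JSON body from php://input and the target API path from $_REQUEST["url"], exactly as the code below expects.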

<?php


// full URL of iMIS site
$url = "https://www.example.org";

// iMIS user's credentials
$username = "testuser";
$password = "testpassword";


if ($_SERVER["REQUEST_METHOD"] == "POST") {

    // JSON submitted by POST
    $json = file_get_contents("php://input");
    
    // ensure API URL and JSON are defined
    if (isset($_REQUEST["url"]) && $json != null) {
    
        // address from which we get a token
        $tokenURL = $url . "/token";
        // API address to which we post data
        $apiURL = $url . "/api" . $_REQUEST["url"];
        
        callAPI($tokenURL, $username, $password, $apiURL, $json);
    } else {
    
        header("HTTP/1.0 400 Bad Request");
        
    $html = <<<EOT
<!DOCTYPE html>
<html lang="en-US">
    <head>
        <meta charset="utf-8">
        <title>400 Bad Request</title>
    </head>
    <body>
        <p>400 Bad Request</p>
    </body>
</html>
EOT;
        
        echo $html;
    }
}


// used to pass Ajax call to API
function callAPI($thisTokenURL, $thisUsername, $thisPassword, $thisAPIURL, $thisJSON) {

    // grab an authorization token to send to API with POST
    $token = getToken($thisTokenURL, $thisUsername, $thisPassword);
    
    // token length will be this short only if an HTTP error status code was returned
    if (strlen($token) < 5) {
        header("HTTP/1.0 " . $token);
    } else {
    
        // this is the header we will send to API
        $header = array("authorization: Bearer " . $token, "Content-Type: application/json");
        
        // initiate curl instance
        $curl = curl_init();
        
        curl_setopt_array($curl, array(
            CURLOPT_URL => $thisAPIURL,
            CURLOPT_HTTPHEADER => $header,
            CURLOPT_SSL_VERIFYPEER => false,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_POST => true,
            CURLOPT_POSTFIELDS => $thisJSON,
            CURLOPT_FAILONERROR => true
        ));
        
        $response = curl_exec($curl);
        
        // tell browser the result of the call
        header("HTTP/1.0 " . curl_getinfo($curl, CURLINFO_RESPONSE_CODE));
        
        curl_close($curl);
        
        return;
    }
}


// retrieve token for use in API call
function getToken($thisTokenURL, $thisUsername, $thisPassword) {

    // this is the username and password we will send
    $content = "grant_type=password&username=$thisUsername&password=$thisPassword";
    // this is the header we will send
    $header = array("Content-Type: application/x-www-form-urlencoded");
    
    $curl = curl_init();
    
    curl_setopt_array($curl, array(
        CURLOPT_URL => $thisTokenURL,
        CURLOPT_HTTPHEADER => $header,
        CURLOPT_SSL_VERIFYPEER => false,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST => true,
        CURLOPT_POSTFIELDS => $content,
        CURLOPT_FAILONERROR => true
    ));
    
    $response = curl_exec($curl);
    
    $json = null;
    $returnStr = "";
    
    // return HTTP status code if there was an error; otherwise, return token
    if (curl_errno($curl)) {
        $returnStr = curl_getinfo($curl, CURLINFO_RESPONSE_CODE);
    }
    else {
        $json = json_decode($response, true);
        $returnStr = $json["access_token"];
    }
    
    curl_close($curl);
    
    return $returnStr;
}


?>

Naturally, you wouldn’t use something unsecured like this in a production environment; with the iMIS credentials pre-populated, anyone who hit the page could submit data to the API with no questions asked! Definitely a no-go. In addition, this PHP code retrieves a new token every time it runs; that token should be saved and re-used until it expires.

Nevertheless, figuring out how to make this work was an interesting exercise, and I was able to connect to the iMIS API from outside the confines of RiSE. Such knowledge could come in handy somewhere down the road.

iMIS API returns “An error occurred while constructing the query”

While working with business objects, IQA, and the API for iMIS 20.2.65.9955, I recently encountered a strange error or undocumented limitation that had me scratching my head for a bit until I figured out what was happening.

To summarize, I created a business object in RiSE, then used that business object to build an IQA query. I was able to run the IQA query and view the results within RiSE with no problems. I was also able to run the generated SQL query displayed on the IQA Summary tab directly against the database without encountering any errors.

When I attempted to use the iMIS API to retrieve the query results, however, the API returned the message, An error occurred while constructing the query. This didn’t make much sense to me since the query ran just fine within RiSE. What was going on?

After some experimentation, I determined that the API returns that error if the IQA query being called uses a business object with a name longer than 32 characters. In other words, a business object named “KB1_MyBusinessObjectNameIsTooLong” will cause problems, but a business object named “KB1_MyBusinessObjNameIsJustRight” will not. Without knowing what the API is doing behind the scenes, I can’t explain exactly why this happens.

The solution is, of course, not to use business object names more than 32 characters long if you intend to retrieve the results of an IQA query using the iMIS API.
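If you generate business object names programmatically, it’s easy to guard against this up front. A trivial check, using the 32-character cutoff I observed in my version (the helper name is my own):

```javascript
// Return true if a business object name is short enough to avoid the
// "An error occurred while constructing the query" API error.
// The 32-character limit is what I observed in iMIS 20.2.65.9955;
// isSafeBusinessObjectName is a hypothetical helper name.
function isSafeBusinessObjectName(name) {
    return name.length <= 32;
}

isSafeBusinessObjectName("KB1_MyBusinessObjectNameIsTooLong");  // false (33 characters)
isSafeBusinessObjectName("KB1_MyBusinessObjNameIsJustRight");   // true (32 characters)
```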

How to retrieve and display data using IQA and the iMIS API

In April, I started a new job. Among my duties at this point is working on converting the organization’s C#-based web parts to IQA queries and JavaScript widgets that make use of the API provided by ASI’s iMIS.

ASI’s API documentation is very thorough in some places and frustratingly fragmentary in others. My goal here is to explain how to create a basic IQA query, which is easy enough, and to provide working HTML and JavaScript that will allow you to run that query, which returns records with a specific last name, and display the results to an iMIS-based website user.

As a reference point, I am using iMIS 20.2.65.9955 as the basis for this tutorial.

Building your IQA query

To begin, access your iMIS staff site, perhaps at https://www.example.org/Staff, and go to RiSE > Intelligent Query Architect. You’ll see a number of existing folders. I recommend creating a folder and if necessary subfolders of your own so that you can easily keep track of the queries that you create. For purposes of the JavaScript code I’ll share later on, I’ll name my folder MyTests.

Click New > Query, then enter a name for your query. For purposes of this tutorial, I’m going to name my query LastName-Test.

  • On the Sources tab, click Add Source. Double-click the CsContact business object listed in the window that appears, or select it and click OK.
  • On the Filters tab, select Full Name from the Property select box in the first row, then click + to ensure the filter is added to the query. In the second row, select Name (Last, First) from the Property select box, enter "@url:LastName" (with the double quotes) in the corresponding Value box, and then click + in that row.
  • On the Display tab, you can choose which pieces of information you want available in the query. The only one with which we’re concerned for this tutorial is Full Name, which is checked by default.
  • On the Sorting tab, choose Name (Last, First) from the Property select box, then click + to ensure the sorting is added to the IQA.
  • Finally, if you want everyone to be able to see the results even if they’re not logged in to the website, select Share (Everyone) on the Security tab.

Click Save. Your IQA query is complete!

Displaying IQA results on a webpage

iMIS offers some iParts, such as Query Menu, that do a fine job of displaying results so long as you don’t need to apply special formatting or manipulate the data in some way. For this tutorial, I’m not going to use those iParts; instead, I’ll demonstrate how to use HTML and JavaScript, including the jQuery library that’s already part of the iMIS website, to display results.

On the iMIS staff site, navigate to RiSE > Page Builder > Manage content, navigate to a folder, and choose New > Website Content. Enter a Title and Publish file name, then click Add content and choose the Content HTML iPart. Select the HTML tab and then enter the following HTML:

<div id="imis-json-results" class="json-results">
    <p id="imis-json-results-loading" class="loading-results">Loading results…</p>
</div>

This is the container into which we’ll load the results retrieved from our API call.

Next, enter the following JavaScript:

<script type="text/javascript">
    // we’ll display these messages only if results can’t be displayed
    var msgNoResults = "No results found.";
    var msgAjaxError = "The results failed to load. Please try again later.";
    
    // create unordered list for insertion into DIV#imis-json-results
    var resultList = document.createElement("ul");
    resultList.id = "imis-results";
    
    // set URL for API call to retrieve names
    // note the QueryName parameter includes folder name and IQA query name
    // the LastName parameter is used due to the "@url:LastName" that we entered while building our IQA query
    // the Limit parameter defines the maximum number of results to be returned
    var apiURL = "/api/IQA?QueryName=$/MyTests/LastName-Test&LastName=Smith&Limit=500";
    
    // make ajax call to API to retrieve names
    jQuery.ajax(apiURL, {
        type: "GET",
        contentType: "application/json",
        headers: {
        
            // this line retrieves the __RequestVerificationToken value that iMIS automatically populates onto the webpage, eliminating the need for separate authentication
            RequestVerificationToken: document.getElementById("__RequestVerificationToken").value
        },
        success: function(data) {
        
            // if you want to see raw data returned by API, uncomment following line and view results in web browser’s developer console
            // console.log(data);
            
            // display results if any were found
            if (data["TotalCount"] > 0) {
            
                // loop through values in JSON string
                for (var i = 0; i < data["Items"]["$values"].length; i++) {
                    var fullName = "";
                    
                    // get properties for specific record, then loop through them
                    var record = data["Items"]["$values"][i]["Properties"]["$values"];
                    for (var j = 0; j < record.length; j++) {
                        if (record[j].Name == "FullName") {
                            fullName = record[j].Value;
                        }
                    }
                    
                    // create list item, then append it to the unordered list created earlier
                    var resultItem = document.createElement("li");
                    resultItem.innerHTML = fullName;
                    resultList.appendChild(resultItem);
                }
            }
            
            // eliminate loading message
            var loadingElem = document.getElementById("imis-json-results-loading");
            loadingElem.parentElement.removeChild(loadingElem);
            
            // append results or message indicating no results were found to DIV#imis-json-results
            if (data["TotalCount"] > 0) {
                document.getElementById("imis-json-results").appendChild(resultList);
            }
            else {
                var noResultsP = document.createElement("p");
                noResultsP.innerHTML = msgNoResults;
                document.getElementById("imis-json-results").appendChild(noResultsP);
            }
        },
        error: function() {
            // eliminate loading message
            var loadingElem = document.getElementById("imis-json-results-loading");
            loadingElem.parentElement.removeChild(loadingElem);
            
            // append ajax error message to DIV#imis-json-results
            var ajaxErrorP = document.createElement("p");
            ajaxErrorP.innerHTML = msgAjaxError;
            document.getElementById("imis-json-results").appendChild(ajaxErrorP);
        }
    });
</script>

Again, the Query Menu iPart may be sufficient for displaying very basic lists, but for more involved projects, this should at least give you a starting point for getting data out of your iMIS database using the API.
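One refinement worth making before reusing this code elsewhere: build the API URL with encodeURIComponent so that parameter values containing spaces or other special characters don’t break the query string. A sketch; buildIqaUrl is my own helper name, not part of the iMIS API:

```javascript
// Build an IQA API URL, URL-encoding each parameter value so that names
// with spaces or special characters survive the query string intact.
// buildIqaUrl is a hypothetical helper, not part of the iMIS API.
function buildIqaUrl(queryPath, params) {
    var url = "/api/IQA?QueryName=" + queryPath;
    for (var key in params) {
        url += "&" + key + "=" + encodeURIComponent(params[key]);
    }
    return url;
}

// → "/api/IQA?QueryName=$/MyTests/LastName-Test&LastName=Van%20Dyke&Limit=500"
var apiURL = buildIqaUrl("$/MyTests/LastName-Test", { LastName: "Van Dyke", Limit: 500 });
```

The query path itself is left unencoded, matching the literal $/Folder/QueryName form used throughout this tutorial.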

Diagnosing a bad laptop hard drive

Last week, my father-in-law asked me to take a look at his Dell Inspiron laptop running Windows 10. He said he had left it powered on but not actively used it for a while, and when he picked it up to try to do something, it didn’t boot up normally, but instead entered Windows Recovery.

In hopes of an easy fix, I began my troubleshooting by simply choosing the Continue option to exit and continue to Windows 10, but after a long delay the laptop ended up back in Windows Recovery again. That time around, I selected the Troubleshoot option. Selecting System Restore revealed no restoration points, so I next tried selecting System Repair, but was soon informed, with no particular reason given, that the system could not be repaired.

At that point, my hopes of a speedy resolution were evaporating pretty quickly, but I still wanted to try to determine what was wrong. I selected the Command Prompt option so I could try taking a look at the file system; then, in the command prompt window that opened, I entered c:. After another long pause, the following message was displayed:

The volume does not contain a recognized file system.
Please make sure that all required file system drivers are loaded and that the volume is not corrupted.

Well, that didn’t sound good! No recognized file system detected? My gut feeling was that the hard drive was dying, but since my father-in-law said he didn’t have any files of note on the computer, attempting to reinstall Windows seemed worthwhile.

I entered exit to get out of the command prompt, then inserted a Windows 10 installation disc into the laptop’s optical drive, selected the Use a device option, and chose EFI DVD/CDROM from the list of devices. After the laptop rebooted, when I was prompted to press any key to boot from CD or DVD, I pressed Enter, and after a period of time, Windows Setup loaded.

With the appropriate language, time and currency format, and keyboard or input method selected, I clicked the Next button, then clicked Install now. I accepted the license terms and clicked Next, selected the Custom: Install Windows only (advanced) option, and then selected the 452 GB partition listed as the Primary partition type (rather than System, Recovery, etc.).

When I selected that partition, Windows Setup displayed a message indicating Windows couldn’t be installed on the partition, so I clicked for details, at which point Windows Setup reported the following:

Windows cannot be installed to this disk. The disk may fail soon. If other hard disks are available, install Windows to another location.

That certainly eliminated any lingering doubts that I may have had about the hard drive being the problem: Windows Setup wouldn’t even attempt to install Windows 10 to the existing hard drive!

The laptop’s existing hard drive is not especially old, but it is a super-slow 500 GB Toshiba 5400 RPM drive. I’ve ordered a 240 GB Seagate BarraCuda SSD to replace it; although it’s only half the size of the original drive, it should run rings around the original, and for what my father-in-law uses the computer for—web browsing and email—it will be more than adequate.

Very basic network troubleshooting

In a former technical support role, my colleagues and I received numerous calls and emails from customers regarding our network-connected devices that “weren’t communicating.” In some cases, those customers had legitimate complaints: network cards occasionally needed to be rebooted, and every once in a while an Ethernet adapter would actually fail completely and need to be replaced.

It was at least as common, however—and I think I could make a strong argument that it was more common—for the connectivity problems to not be related to my employer’s hardware at all! I wouldn’t necessarily expect the average user to perform network troubleshooting, but having to almost beg IT staff with some organizations to check their own network and their own equipment got old in a hurry.

With that in mind, I thought it would be worthwhile to compile a few basic troubleshooting tips that often helped me and the customers I was supporting determine whose team really needed to look into the problem. This is not intended to be a complete list of potential problems, but if you are new to technical support, or even if you are simply an end user trying to figure out whom you need to contact, these things may get you pointed in the right direction.

#1: Is the device turned on?

This one is so obvious that I almost hate to even ask the question, but seriously: is the device turned on? Are you sure it’s turned on? If someone disconnected the power adapter, or if your electrician flipped the circuit breaker so he or she could work on an electrical issue, and the device in question is powered off, you’re not going to be able to communicate with it.

#2: Is the device connected to the network?

I could just as easily have made this #1. Again, I hate to ask the question, but if we’re talking about a hard-wired device, does it have an Ethernet cable connected to it? Is the other end of the Ethernet cable connected to anything? Are there any Ethernet cables hanging loose at the nearest network switch?

Likewise, if the device in question connects to your network via Wi-Fi, does it actually show as being connected? Can you even see the network’s SSID if you take a quick peek at your phone?

In either case, if the device is not connected to the network, either physically or via Wi-Fi, you’re not going to be able to communicate with it.

#3: Are you able to ping the device?

Assuming that you’ve already checked the first two items—and you did confirm that the device is powered on and connected to your network, right?—my next recommendation is to try pinging its IP address from another computer on your network.

If you’re using Windows, you can open a command prompt by pressing Windows + R, then entering cmd and clicking OK. In the command prompt window, type ping 10.10.10.10, replacing the IP address with your device’s IP address, and then press Enter. You’ll likely find one of the following:

  • If you get a response with time values, then something with that IP address is connected, and we can troubleshoot from there.
  • If you get a response indicating that the request timed out, there could be a problem with the device in question, or there could be an issue elsewhere on the network. Proceed to item #4. (Note that some devices are configured to not respond to ping, so the lack of a response here may not necessarily indicate a lack of network connectivity.)
  • If you get a Destination host unreachable response, there is probably a network issue that your network staff will have to investigate. Proceed to item #4.
  • If you get a message stating that the TTL expired in transit, a network device is misconfigured, and your network staff will have to investigate. Proceed to item #4.
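If you want to script this first check, the ping exit status alone is often enough. The sketch below assumes a Linux or macOS shell (where the count flag is -c, rather than Windows ping's -n), and the loopback address is purely a stand-in for your device's IP:

```shell
# Ping a host once and report reachability based on ping's exit status.
# 127.0.0.1 (loopback) is a stand-in; substitute your device's IP address.
host="127.0.0.1"
if ping -c 1 "$host" > /dev/null 2>&1; then
    echo "$host responded"
else
    echo "$host did not respond"
fi
```

Keep in mind the caveat from the list above: a device configured to ignore ping will show up here as not responding even when it is actually on the network.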

#4: What does tracert show?

Using the command prompt window that you opened previously, try entering tracert 10.10.10.10, once again replacing the IP address with your device’s IP address, and then press Enter.

Depending on your network, you may initially see IP addresses or server names along with response times listed in milliseconds, but eventually you will probably see asterisks along with the message, “Request timed out.” Provide your network staff with the last IP address listed with response times, which is the last network device from which your computer got a response, and that may help them narrow down where the problem lies.

One exception to this is when there is a network misconfiguration. In that case, you may see the same pair or sequence of IP addresses repeated over and over again, which typically indicates a routing loop. Even if that’s the case, you’ll still need to send the information on to your IT staff for further investigation.
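As a quick illustration of what to look for, here is a sketch that filters tracert-style output down to the last hop that actually responded. The hop addresses are made up (documentation address ranges), and the filtering simply keeps lines containing millisecond response times:

```shell
# Made-up tracert-style output; the addresses are documentation examples.
tracert_output='  1    <1 ms  192.168.1.1
  2     8 ms  203.0.113.1
  3     9 ms  198.51.100.7
  4     *     Request timed out.
  5     *     Request timed out.'

# Keep only the hops that responded (lines containing a millisecond time),
# then print the last one: that address is what to report to network staff.
printf '%s\n' "$tracert_output" | grep ' ms ' | tail -n 1
```

In this made-up trace, the last responding hop is 198.51.100.7, so that is the device where your network staff would start looking.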

#5: Does a different device connected to the same Ethernet cable as the device you’re troubleshooting have network connectivity?

One other thing you can do is configure a laptop to use the same network settings (IP address, subnet mask, and default gateway) as the device that you’re troubleshooting, then disconnect the Ethernet cable from the problem device and connect the cable to your laptop. If your laptop has network connectivity, you’ve confirmed that the physical connection itself is good.

I should add that not having network connectivity at this point doesn’t necessarily mean that there’s a network-related problem. Depending on firewall and switch configurations, and whether or not your IT team is doing any sort of MAC filtering, it may be impossible to connect your laptop to the network in this way, but if it does work, then you can rule out the network being the problem.

Wrapping it up

Again, this is not an exhaustive list, but simply a few questions that I’ve commonly asked when attempting to troubleshoot problems with devices not communicating over a customer’s network.

Once you’ve worked your way through this list, if you still haven’t identified the problem, then it’s time to escalate the issue to your IT staff or the support team for the device in question. Doing these few basic checks first, however, can save you and everyone else some time.

How much Web server do I need?

Several years ago, I went into some detail on why I think you should have your own website if you work in or want to work in technical support. Industry professionals expect you to have a website, and you can learn a lot from creating and maintaining your own.

At some point, you may decide you also want to run your own Web server. Perhaps you will opt to use an old desktop system in your own home to do the heavy lifting, or maybe you’ll sign up for a virtual private server like the ones that I use. Either way, it’s very possible that you will be running some flavor of Linux as your server’s operating system.

This leads to a natural question: how much Web server do you really need? There is an excellent chance that the answer is “a lot less than you think.” By sharing my own experiences, I hope to help you make an educated decision.

What’s your goal?

The first thing you have to nail down is exactly what your needs are. If you intend to host a ton of high-definition videos, you may need a beefy setup, but if your goal is simply to run a blog or two, run your own mail server, or set up a simple e-commerce site—or maybe even do all of the above—then you’re not likely to tax even a server with relatively low resources.

I had already been tinkering with websites for years, first on free shared hosting and then on paid shared hosting, before I took the plunge into managing my own virtual servers. I had a rough idea about what sort of traffic I’d need to be able to handle—a few thousand visitors per month—and I knew I would be hosting scans of material from my stamp collection on the oldest of my websites, Philosateleia. In addition, I wanted to start managing my own mail server for the learning experience.

What I did

In the interest of getting experience with a common server configuration, I opted to run a LAMP stack. I also decided I would like to have two separate servers: one for my websites, and one for my mail server.

I’m using Ubuntu Server, which is command line only, on a pair of virtual servers. Each has 1 GB of RAM and 20 GB of disk space as well as unlimited bandwidth, but quite frankly, unless you’re streaming video or your site becomes the next Amazon, the amount of data transfer offered with virtually any hosting plan should be more than adequate for any traffic you’re likely to see.

My Web server is running a couple of websites plus this blog. As I mentioned earlier, we’re talking a few thousand visitors per month, which isn’t bad considering the nature of my sites. My email server with maybe a dozen or so email accounts on it is running a combination of Postfix, Dovecot, and SpamAssassin. (My email server was previously running ClamAV, which I ended up uninstalling; more on that in a bit.)

And you know what? Those servers are more than adequate for everything I’ve described, and I suspect they would be adequate for you, too. Running on what are essentially bargain-basement virtual private servers, I have not encountered any problems with resource demands, except…

A note on ClamAV

When I first set up my servers, ClamAV ran flawlessly, but as the years passed, I started seeing emails that had not been scanned for viruses. A bit of research and poking around in log files led me to the realization that my server didn’t have sufficient resources for ClamAV to run.

According to ClamAV’s documentation, a minimum of 1 GB of RAM is recommended to run it. My server has exactly that much, but with other packages running, it’s apparently not enough for ClamAV to operate reliably.
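If you suspect you’re running into the same wall, it’s worth checking how much memory is actually available once your other services are running. A minimal sketch, assuming a Linux server with /proc/meminfo (which Ubuntu has):

```shell
# Report available memory in MB. Reading /proc/meminfo directly works even
# on minimal installs; "free -m" shows the same figure in its "available"
# column on current procps versions.
awk '/^MemAvailable/ {print int($2 / 1024) " MB available"}' /proc/meminfo
```

If the number you get back is consistently well under what a memory-hungry daemon like ClamAV wants, you have your answer.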

I opted to simply uninstall ClamAV from my server. If you’re determined to have your emails scanned for viruses, I suggest going with a minimum of 2 GB of memory.

Sending Veeder-Root commands using C#

Connecting to a Veeder-Root tank level sensor unit or other automatic tank gauge unit that accepts Veeder-Root commands via telnet is a pretty simple task. I’ve discussed in the past how to do that using PuTTY, and of course there’s the good old telnet.exe included with Windows that can do the same thing.

But what do you do if you want to connect to a unit programmatically using a program written in C#?

If you’re working for an employer or customer who has tens or even hundreds of Veeder-Root units, a program of this sort may be not just handy, but necessary to speed up your day and save you from boredom.

I recently tackled a project to create a program with the sole purpose of updating the clocks on all of a customer’s Omntec units. Those units do not automatically update their internal clocks when daylight saving time begins or ends, so a unit whose clock is correct before the change is an hour off afterward.

It is not my intent to reproduce my entire program here since it’s very much a niche product; however, I do want to briefly explain how to send Veeder-Root commands to a TLS unit from a C# program.

MinimalisticTelnet

What I didn’t want to do was completely reinvent the wheel when it comes to connecting to the customer’s units. Since C# telnet packages do exist, I figured one of those would probably be my best bet, and I ended up using MinimalisticTelnet, which is more than adequate for my purposes.

Connecting to an Omntec unit using the sample program provided with MinimalisticTelnet was simple enough. In the TelnetInterface.cs file, I did change the default value of TimeOutMs to 2000 instead of 100. The class is written so that the timeout is set as part of the Login function, but since the units to which I am connecting don’t require a login, that wasn’t going to work for me. (An alternative would have been to modify TelnetConnection so that I could pass in a timeout value, but I just needed something quick and dirty.)

Connecting to a unit was easy enough. Sending a command and getting a response was trickier.

A word about Veeder-Root commands

When connected to a Veeder-Root or compatible TLS unit via telnet, you can issue commands ranging from setting the current time on a unit, which is what I needed to do, to querying the unit for current tank levels, and much more.

You begin each command by entering Ctrl + A. That’s easy enough on a keyboard, but how was I supposed to make it happen programmatically? The sample program provided with MinimalisticTelnet doesn’t dive into sending special keystrokes.

The solution

After quite a bit of searching, and a lot of trial and error, I stumbled across a decade-old Rebex.net blog post that mentioned using \x3 if you need to send Ctrl + C using their telnet package. Maybe \x1 would work for Ctrl + A using MinimalisticTelnet’s implementation? I gave it a try…and it worked!

For your reference, my pared-down resulting code looks something like this:

TelnetConnection tc = new TelnetConnection(thisHostName, thisPort);
if (tc.IsConnected)
{
    // \x1 is the SOH (Ctrl + A) character that begins every Veeder-Root command.
    // "i20100" (an in-tank inventory query) is shown only as an example;
    // substitute whatever command you actually need to send.
    string command = "\x1" + "i20100";
    tc.WriteLine(command);
    string response = tc.Read();
}

Hopefully this will save you some time if you’re trying to automate the process of connecting to a group of Veeder-Root, Omntec, or compatible TLS/ATG units.