Tag: Google App Scripts

Google Sheets Blogging CMS, part 2

This is the second post in a three-part series on using Google Sheets as the database for a blogging CMS. In this post, I’ll walk you through the Google Sheet itself and the Google Apps Script code (Google’s flavor of JavaScript) that drives it. In the first post, I explained the motivations for building the system. In the third post, I’ll share the website that displays the blog, and the code behind it. My guess is that interest in the three pieces will vary for different audiences, so I wanted to encapsulate the descriptions.

Inspired by Tom Woodward and Martin Hawksey, I’ve been using Google Sheets as a database for various projects for a while now. I’ve written scripts to collect Hypothes.is web annotations from their API, save the choices of players going through choose-your-own-adventure Twine games, identify unused Domains of One’s Own within OU Create, and track my own writing both here on my blog and on various article and book projects. Google Sheets basically provides CSVs that can be written and read via API.

The Limitations of Google Forms

When we were brainstorming the interface for Prof. Jenel Cavazos’s Psychology 1113 class blog, I wanted to use a form that didn’t require a login to collect blog posts and store them in a database. I also wanted a form that wouldn’t strain OU Create’s servers. Alan Levine’s Splot forms would do a great job of collecting the blog posts, but I worried that if all 950 of Prof. Cavazos’s students submitted assignments at the same time, it would crash our OU Create servers. Google Forms satisfied both conditions and seemed an obvious choice.

One problem is that you cannot submit files via Google Forms. We wanted students to be able to submit ‘featured images’ for their blog posts in the way that WordPress uses featured images. Google Forms also doesn’t have a way of collecting rich text in long form text entry fields.

A blog with no images and no rich text isn’t much of a blog, so I decided to create my own form instead of using Google’s. At first, I thought I would write and host a standalone form and connect it to Google Sheets using Martin Hawksey’s HTTP POST methods. During the research for this idea, Tom Woodward suggested I look at Amit Agarwal’s work on handling file uploads within a form built in Google Scripts.

In his post and his blog more broadly, Amit Agarwal showed how to build a traditional HTML form in Google Sheets and use it to upload images into Google Drive. Agarwal also built a really clever interface in WordPress that will put together these custom forms. This is a great way to get really functional forms, but it requires purchase of licenses, and I wasn’t sure that Agarwal would release the code that I needed to connect the Sheets with my final website. So, I used the method that Agarwal discussed in his blog post for accepting file uploads and spliced it with Hawksey’s work on writing to Google Sheets from Google Scripts. In the end, I created a form that collects all the information needed for a PSY1113 blog post, stores the ‘featured image’ in Google Drive (per Agarwal), and then records the information for the blog post to a row in a Google Sheet (per Hawksey).

Google Script Code Walk Through

Clicking on this link will create a copy of the Google Sheet I created. From within that sheet, you can click on Tools>Script Editor in the menu bar to work with both the custom form and the Google script for this project. In the next post in this series, I’ll share the code for the website, so that you can stand up a copy of the entire project and do whatever you want with it.

I’ve also put the code in a GitHub repository. You’re welcome to copy, fork, read-along, or do whatever you want with that (within the parameters of a GNU GPLv3 license). Below I’m going to walk through a few pieces of the code that I thought were particularly interesting.

function doGet(e) {
  var output = HtmlService.createHtmlOutputFromFile('forms.html').setTitle("Post to the PSY1113 Blog");
  return output; // return the HTML output so Apps Script serves the form
}

Within the Google Script project, there are two files: server.gs and forms.html. The server.gs file is the primary file, the one that Google runs when we set up our web app. The key function within the file is this doGet, which serves our form for the project. Rather than getting data from an external source, we create a forms.html file and get the data directly from it.

Once someone has submitted the form (I’ll discuss the form itself in a minute), the server.gs file runs a couple of functions in sequence. The first function, called uploadFileToGoogleDrive, takes the uploaded image and stores it in the sheet creator’s Google Drive in a directory called ‘Received Files.’

var dropbox = "Received Files";
var folder, folders = DriveApp.getFoldersByName(dropbox);

if (folders.hasNext()) {
  folder = folders.next();
} else {
  folder = DriveApp.createFolder(dropbox);
  folder.setSharing(DriveApp.Access.ANYONE, DriveApp.Permission.VIEW);
}

/* Credit: www.labnol.org/awesome */

var contentType = data.substring(5, data.indexOf(';')),
    bytes = Utilities.base64Decode(data.substr(data.indexOf('base64,') + 7)),
    blob = Utilities.newBlob(bytes, contentType, fileName),
    file = folder.createFolder([lastName, title].join(" ")).createFile(blob),
    id = file.getId();

This bit of code checks to see if you have a directory called ‘Received Files.’ If not, it creates the directory. Then it creates a subdirectory based on the last name and title collected in the form. It then stores the image from the form in that directory, and records the ID for the image. Because this is the first function, this version of the code requires an image upload. If we wanted to make the image upload optional, we could rewrite a few lines to decouple the two functions so that either could be run at the time of form submission.

In its current form, once the file has been stored, the second function, sheetRowGenerator, is called. This function is adapted from Hawksey’s work. It takes all of the information from the form and the uploaded file and writes that information to a row in the Google Sheet that is attached to this Google Script. My version of the script looks at which section (house) of the class the student is in, and writes the data to the sheet for that section.

function sheetRowGenerator(firstName, lastName, title, blogText, house, assignment, assignmentText, id) {

  var lock = LockService.getPublicLock();
  lock.waitLock(30000);  // wait 30 seconds before conceding defeat.

  try {
    // next set where we write the data - you could write to multiple/alternate destinations
    var doc = SpreadsheetApp.openById(SCRIPT_PROP.getProperty("key"));
    var sheet = doc.getSheetByName(house);

    // header is in row 1; data goes in the next blank row
    var nextRow = sheet.getLastRow() + 1; // get next row
    var row = [];
    row.push(nextRow); // using the row number as an ID for the blog post
    row.push(new Date());
    row.push(firstName);
    row.push(lastName);
    row.push(title);
    row.push(blogText);
    row.push(house);
    row.push(assignment);
    row.push(assignmentText);
    row.push(id);
    // more efficient to set values as [] array than individually
    sheet.getRange(nextRow, 1, 1, row.length).setValues([row]);
    // return json success results
    return ContentService
          .createTextOutput(JSON.stringify({"result":"success", "row": nextRow}))
          .setMimeType(ContentService.MimeType.JSON);
  } catch(e){
    // if error return this
    return ContentService
          .createTextOutput(JSON.stringify({"result":"error", "error": e}))
          .setMimeType(ContentService.MimeType.JSON);
  } finally { // release lock
    lock.releaseLock();
  }
}

function setup() {
  var doc = SpreadsheetApp.getActiveSpreadsheet();
  SCRIPT_PROP.setProperty("key", doc.getId());
}

The outer pieces of this code get the key and ID of the sheet that you’re working from. They pass this information into the inner functions, which find the next blank row of that sheet and then write the long list of variables to it. The last variable pushed, ‘id’, is actually the ID of the image that the user uploaded. The image can be viewed if we append that ID to ‘https://drive.google.com/uc?export=view&id=’. I’ll return to this point in the next post when I walk through the code for the blog website.
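Since the image’s file ID lands in the sheet, any client can reconstruct a viewable URL for it. Here’s a minimal sketch (the file ID below is a made-up placeholder, not a real Drive ID):

```javascript
// Build a viewable image URL from the Drive file ID stored in the sheet.
function driveImageUrl(id) {
  return 'https://drive.google.com/uc?export=view&id=' + encodeURIComponent(id);
}

console.log(driveImageUrl('FAKE_FILE_ID'));
// https://drive.google.com/uc?export=view&id=FAKE_FILE_ID
```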

So in sum, we have Agarwal’s code to store the uploaded image in our Google Drive account, and a modified version of Hawksey’s code to store the information for the blog post in a row of our Google Sheet. Next, let’s look at the Form itself.

Custom Google Form for Blogging

Above, I noted how the server.gs file calls the forms.html file to create the form. This forms.html file is a fairly typical html file, and I’ve included some css and js in mine to get it to look and act how I want. I’m sure that I could have separated these pieces out into other files. If I’d had more time, I’d have written a shorter form file, but this one works. You can see and fill out the demo form here.

Custom form created with Google Scripts

Within the form itself, the two interesting pieces to me are that I’ve used Materialize CSS and Quill.js. Materialize was a carry over from Agarwal’s work. I like how the field labels react as you enter information, and the way that they offer visual cues to verify your inputs.

Quill was the more exciting find in this project. Quill.js is an open-source rich text editor. It stores both your work and the history of changes in that work in a JSON format called a ‘Delta.’ Here’s the js code to instantiate the Quill container:

var quill = new Quill('#editor-container', {
  modules: {
    toolbar: [
      ['bold', 'italic'],
      ['link', 'blockquote', 'code-block'],
      [{ list: 'ordered' }, { list: 'bullet' }]
    ]
  },
  placeholder: 'Compose an epic...',
  theme: 'snow'
});

You can get some notion of how Quill might be used as a Wiki here. Quill’s Deltas are hard to work with, because you have to write the JSON parsing code to extract the HTML for display. They seem to want you to use Quill for both input and display. That would allow for an interesting post-publication editing interface, but it’s not what I had envisioned. Instead, I read the editor’s innerHTML property (quill.root.innerHTML), which returns the HTML written by the user as a string. I passed this string into a cell of my Google Sheet and then passed it to my blog site for rendering.
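To see why I avoided Deltas, here’s a minimal sketch (not code from this project, and using made-up content) of what parsing one involves, just to recover plain text from the ops array:

```javascript
// A Quill Delta is JSON: an array of insert operations, possibly with attributes.
const delta = {
  ops: [
    { insert: 'Hello ' },
    { insert: 'world', attributes: { bold: true } },
    { insert: '\n' }
  ]
};

// Even extracting plain text means walking every op; rebuilding HTML
// (bold runs, lists, blockquotes) takes considerably more logic than this.
const plainText = delta.ops
  .map(op => (typeof op.insert === 'string' ? op.insert : ''))
  .join('');

console.log(plainText);
```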

When a user is done filling out the form and clicks submit, the form checks to see if all the fields were filled in and then passes the data back into server.gs. Sorry for the length of this bit of code. I think it could be optimized; feel free to skip it if you don’t care.

var file, 
          reader = new FileReader();

      reader.onloadend = function(e) {
        if (e.target.error != null) {
          showError("File " + file.name + " could not be read.");
          return;
        } else {
          var firstName = $('#first_name').val();
          var lastName = $('#last_name').val();
          var blogTitle = $('#blog_title').val();
          var blogText = quill.root.innerHTML;
          console.log(blogText);
          var house = $('#house').val();
          var assignment = $('#assignment').val();
          var assignmentText = $('#assignment').find(":selected").text();
          google.script.run
            .withSuccessHandler(showSuccess)
            .uploadFileToGoogleDrive(e.target.result, file.name, firstName, lastName, blogTitle, blogText, house, assignment, assignmentText);
        }
      };

      function showSuccess(e) {
        if (e === "OK") { 
          $('#forminner').hide();
          $('#success').show();
        } else {
          showError(e);
        }
      }

      function submitForm() {

        var files = $('#files')[0].files;
        var firstName = $('#first_name').val();
        var lastName = $('#last_name').val();
        var blogTitle = $('#blog_title').val();
        var house = $('#house').val();
        var assignment = $('#assignment').find(":selected").text();
        
        if (firstName.length === 0 || lastName.length === 0 || blogTitle.length === 0 || house.length === 0 || assignment.length === 0) {
          showError("Please fill out form completely");
          return;
        }
        
        if (files.length === 0) {
          showError("Please select a file to upload");
          return;
        }

        file = files[0];

        if (file.size > 1024 * 1024 * 2) {
          showError("The file size should be < 2 MB.");
          return;
        }

        showMessage("Uploading file..");

        reader.readAsDataURL(file);

      }

      function showError(e) {
        $('#progress').addClass('red-text').html(e);
      }

      function showMessage(e) {
        $('#progress').removeClass('red-text').html(e);
      }

You can see that the file size for uploads has been limited to 2 MB. Generally, I don’t use large images for the web, to minimize load times. Also, we have 950 students, and we’re planning on as many as a dozen blog assignments. With 11,400 blog posts, you could quickly consume gigabytes of storage.
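A quick back-of-envelope check of that worst case, assuming every one of the 950 students uploads a full 2 MB image for each of a dozen assignments:

```javascript
const students = 950;
const assignments = 12;
const maxUploadMB = 2;

const totalMB = students * assignments * maxUploadMB; // 22,800 MB
const totalGB = totalMB / 1024;

console.log(totalGB.toFixed(1) + ' GB');
```

So even with the 2 MB cap, a full semester could approach the free 15 GB Drive quota, which is why keeping images small matters here.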

Rather than running PHP or making an HTTP POST call to send the data to some form of storage, this file calls the uploadFileToGoogleDrive function and passes all of the collected data to it as parameters.

Takeaways

Here are the steps for creating your own copy of this project and getting it running. I elided a few things in the walk through above, but (hopefully) this has all the steps:

  1. Create a copy of the Google Sheet I created.
  2. From within that sheet, you can click on Tools>Script Editor in the menu bar to access both the custom form and the Google script for this project.
  3. Within the Form, customize the list of Sections (lines 63-70) and Assignments (lines 77 & 78)
  4. If you add any fields or change any fields, make sure to update the functions starting on line 142 of the forms.html file to collect and pass the correct information from your form.
  5. Once you’re happy with how everything looks, click on the server.gs file, and select the Setup function from the ‘Select Function’ drop down menu. Hawksey wrote this function to help get your scripts authorized within your Google account and make sure the functions are properly linked to your Google Sheet.
  6. Select ‘Publish>Deploy as Web app’ from the menu to make your project live. Set the Project version to ‘new’ and republish as new anytime you make a change to your code. Also set the ‘Execute the app as’ to ‘Me’ and the ‘Who has access to the app’ to ‘Anyone, even anonymous.’
  7. Once you hit publish (or update) you will get a Current web app URL for your form.
  8. From your Google Sheet, clear out any demo data you don’t want.
  9. Click on ‘File>Publish to the web’ from the menu to open access to your sheet to use it as a DB. Make sure the settings are set to ‘Entire Document’ and ‘Web Page.’ 

What I learned from this project is that you can host a form from Google Scripts, use it to build a (headless) database, and then call that database remotely from other websites. By hosting the script as a Web App (Publish > Deploy as web app), you get a secure HTTPS URL that can collect data.

The quota limit that I’ve found is that we can’t exceed 100 form submissions per 100 seconds. I’m a little worried that we might also be limited to 500 images uploaded per day. If that is the case, I will rewrite the functions above to make the image upload optional or allow for image URL links as opposed to uploads.

Google Sheets provides a fairly easy and malleable interface for managing a CSV and can publish the data as JSON for easy access from other sites. In the next post, I’ll show how I call the sheet and display the data using jQuery.

Google Sheets Blogging CMS, part 1

This is the first post in a three-part series on using Google Sheets as the database for a blogging CMS. In this post, I’ll explain the motivations for building the system. In the second post, I’ll walk you through the Google Sheet itself and the Google Apps Script code (Google’s flavor of JavaScript) that drives it. In the third post, I’ll share the website that displays the blog, and the code behind it. My guess is that interest in the three pieces will vary for different audiences, so I wanted to encapsulate the descriptions.

At the beginning of the summer, I met with Prof. Jenel Cavazos about introducing blogs into her Psychology 1113 course. This freshman level introduction to psychology is one of the largest classes on campus with 900 students enrolled in a typical fall semester. Additional sections of PSY1113 taught by other instructors often enroll another 300 students. 

Screen shot of the OU Create landing page at https://create.ou.edu

1200 student blogs in a single course is a lot. At OU, we have a Domain of One’s Own project, where every student, staff and faculty member can sign up for a web domain with 5GB of storage. OU Create currently has about 5000 users running more than 6000 websites in their domains. To support this, we work with Reclaim Hosting to run five virtual private servers on Digital Ocean. We spin up a new Droplet (VPS) for each 1000 users to minimize the strain on the servers and make sure a server outage doesn’t take down the whole system.

Increasing our user base by 20% and spinning up a new VPS for a single class each semester would strain our resources, so we started brainstorming other options for how we could provide a blogging platform for the class.

Possible Blogging Options

For one of our architecture classes, we use a single course blog, and all of the 100-300 students log in as users to blog in that one site. However, that number of users already strains the ability of a website to handle simultaneous logins and posts. With 900 students potentially logging in to submit posts before an assignment deadline, we didn’t think this option would be stable enough.

Simply using WordPress.com and other free blogging platforms would work and would move the traffic onto a distributed network of servers. However, it would be difficult to support the students with any technical issues that might come up. It would also be difficult for Prof. Cavazos and the other instructors to keep track of the 1200 URLs for their students.

We thought about having the students just write in Google Docs or other cloud based word processors. However, this option would sacrifice the open audience of blogging. One of the benefits we see in blogging is that it encourages students to move from an audience of one (the instructor) to a potentially vast audience. The meta cognitive task of thinking about audience changes what and how we write. It can also support a sense of efficacy for students in that they are publishing their work and contributing to the knowledge base of the web.

Our Blogging Interface

Since none of the off-the-shelf options quite fit our needs, we decided to try to build something new. The design constraints for the blogging project were:

  1. A simple user interface, both for the students and the instructors
  2. Don’t crash our servers or spend a ton of money standing up a new server for 1200 users
  3. The blog should be public facing
  4. There should be a commenting functionality

The easiest UX that we’ve come up with is front-end blogging from a form. A user simply fills out a form with their name, the title of their blog post, the text of their post, and an image for the post. There’s no login and no need to navigate the UX of WordPress or another blogging service.

We’ve played with this concept of front-end blogging for a while now. Alan Levine has built an open-source tool called TRU Writer that even provides this type of front-end interface on a WordPress site. My colleague, Keegan Long-Wheeler, has built similar form interfaces into several of his more recent faculty development websites. However, these front-end interfaces still rely on the WP database to handle the submissions, and thus they would both be potentially overwhelmed by 900 submissions when an assignment is due.

For my new system, I wanted to use a Google Form to replicate the form in Alan’s system but shift the burden of traffic onto more robust Google app servers. However, Google Forms don’t allow you to upload images or use a Rich Text Editor to write extended text blocks. This would mean our blog posts would have no links, lists, embeds, or anything else that makes them more than just a text file.

Instead of Google Forms, I decided to use Google Scripts to build my own custom form in HTML and push the information from the form into Google Sheets. Accepting file uploads and rich text and saving them to a Google Sheet was not easy (at least not for me), but I will share the code and details on how I built this in the next post in the series.

Custom form served by Google that posts form results to Google Sheets

One potential problem with front-end blogging is that anyone can fill out the form and potentially fill the site with spam. However, my custom form is served from a url that is not indexed by the Google Search engine and not linked from any public facing websites. It is however embedded in our learning management system for the class, Canvas. Thus students write their blog post in the LMS, and it shows up on the website.

Blog Site

The custom form provided the interface for writing blogs and storing the data in a Google Sheet. The next problem was displaying these posts as an actual blog site.

For this step, I used Google’s API engine to expose the Google Sheet data as a JSON file. I then called that JSON file into my website using jQuery and read the data into an index.js file. Then I parsed and paginated the data, and passed it into an HTML container for display on an index.html page. I copied this site architecture and tailored it for each of the 16 sections in the 900-person PSY1113 course. Thus we have 16 team websites, each displaying the blog posts for the 50-60 students in that team. Again, I’ll give a walk-through of the actual code in the third post in this series. For now, here’s a screen shot of the Team Ainsworth site (each of the teams is named after a noted psychologist) and a second screen shot depicting how demo data is rendered as blog entries.

Screen shot of the Team Ainsworth site. Hopefully, by the time you’re reading this, students will have begun to populate the site with posts.
Placeholder blog post entries are displayed using jQuery

From the index page for each site, you can click on the link for an individual blog post to read that post. These links actually all point to a single blog post page. However, there’s a blog post ID “parameter” attached to the URL for this page that tells the system which row of data from the Google Sheet to display for the post. Javascript within the blog post page pulls that data and then uses it to build the display for the particular blog post you want to read. The third-party commenting service, Disqus, provides a commenting interface to leave feedback on the given “blog post” and keeps the comments separate using the blog post ID parameter.
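As a sketch of that pattern (the parameter name ‘id’ here is my illustration, not necessarily the name the real site uses), the single blog post page can pull its ID out of the query string like this:

```javascript
// Extract the blog post ID parameter from a page URL.
// In the browser you'd pass window.location.href.
function postIdFromUrl(href) {
  return new URL(href).searchParams.get('id');
}

console.log(postIdFromUrl('https://example.com/post.html?id=42')); // '42'
```

That ID then maps to a row number in the Google Sheet, which is all the page needs to fetch and render one post.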

In addition to the public facing site, this build also provides the instructors for the course with the Google Sheet itself as a space for reading student blog posts. You could filter the sheet to see the contributions of a certain student or all contributions submitted in a given timeframe. You could graph the number of blog posts or number of words that meet a given criteria and you could easily export the text set for text analysis. You could also use the json for the entire blog set to create alternative views and visualizations that highlight linkages and themes. 

Potential Applications

I think this Blog via Sheets tool is going to work well for the 900+ students in PSY1113 because it 1) is easy to use, 2) won’t crash, 3) presents their work on the open web, and 4) has commenting.

I could see this tool being used by anyone who wants a fairly inexpensive blogging platform for 500-2000(ish) people. A high school or college could tailor the code and tweak the css to spin up their own blogging platform. I could see an office like ours (Office of Digital Learning) using this to share our work both internally and with the broader digital learning community. I could also see Personal Learning Networks (PLN’s) using this type of interface to create ad-hoc blog communities. 

Twine Game Data to Google Sheets via Javascript

Using Twine, a free, open-source, text-based game software, you can build choose-your-own-adventure games that explore the untaken paths in literature, promote empathy through simulated experiences, and provide role-playing adventures in historical scenarios. Twine games are often used in the classroom, because you can quickly build an educational experience about whatever subject you choose. They are also heavily used in the interactive fiction world as a medium for short stories and novels.

In the XP Twine workshop that Keegan and I are leading, several of the faculty members asked how Twine games could be used to track students’ understanding of concepts. One faculty member is building a game that simulates entrepreneurial business investment. In the game, students can try out different investment strategies and equity stakes as they try to become a successful venture capitalist. The professor wanted to be able to track the choices they made in-game in order to spur in-class discussion.

Twine games take the form of HTML files with embedded CSS and JS. In my latest round of tinkering, I figured out how to use javascript within a Twine game to send an HTTP post message to pass game-play data to a Google Spreadsheet, thereby creating a database that records each game-play.

Google Sheet/Apps Script Code

In order to track this game data, I suggested that we push the data from Twine to a Google Spreadsheet. Following the lead of Tom Woodward, I’ve found that Google Spreadsheets are a relatively easy place to collect and analyze data. I wanted to use Google Scripts, which are mostly javascript and a few custom functions, to receive data and parse it into the cells of the Google Sheet.

Martin Hawksey wrote a blog post a few years ago called “Google Sheets as a Database – INSERT with Apps Script using POST/GET methods (with ajax example).” Martin had set up an Ajax form that could be embedded in any website that would pass data to his Google Script which would then record it in his Google Sheet. Martin’s code (below) receives an HTTP Get or Post call generated by an Ajax form, parses the parameters of that HTTP call, and stores those parameters in a Google Sheet. Martin also provides comments in his code to help users customize the Google script and initiate it as a Web App.

//  1. Enter sheet name where data is to be written below
        var SHEET_NAME = "DATA";
 
//  2. Run > setup
//
//  3. Publish > Deploy as web app 
//    - enter Project Version name and click 'Save New Version' 
//    - set security level and enable service (most likely execute as 'me' and access 'anyone, even anonymously') 
//
//  4. Copy the 'Current web app URL' and post this in your form/script action 
//
//  5. Insert column names on your destination sheet matching the parameter names of the data you are passing in (exactly matching case)
 
var SCRIPT_PROP = PropertiesService.getScriptProperties(); // new property service
 
// If you don't want to expose either GET or POST methods you can comment out the appropriate function
function doGet(e){
  return handleResponse(e);
}
 
function doPost(e){
  return handleResponse(e);
}
 
function handleResponse(e) {
  // shortly after my original solution Google announced the LockService[1]
  // this prevents concurrent access overwritting data
  // [1] http://googleappsdeveloper.blogspot.co.uk/2011/10/concurrency-and-google-apps-script.html
  // we want a public lock, one that locks for all invocations
  var lock = LockService.getPublicLock();
  lock.waitLock(30000);  // wait 30 seconds before conceding defeat.
 
  try {
    // next set where we write the data - you could write to multiple/alternate destinations
    var doc = SpreadsheetApp.openById(SCRIPT_PROP.getProperty("key"));
    var sheet = doc.getSheetByName(SHEET_NAME);
 
    // we'll assume header is in row 1 but you can override with header_row in GET/POST data
    //var headRow = e.parameter.header_row || 1; Hawksey's code parsed parameter data
    var postData = e.postData.contents; //my code uses postData instead
    var data = JSON.parse(postData); //parse the postData from JSON
    var headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues()[0];
    var nextRow = sheet.getLastRow()+1; // get next row
    var row = []; 
    // loop through the header columns
    for (i in headers){
      if (headers[i] == "Timestamp"){ // special case if you include a 'Timestamp' column
        row.push(new Date());
      } else { // else use header name to get data
        row.push(data[headers[i]]);
      }
    }
    // more efficient to set values as [][] array than individually
    sheet.getRange(nextRow, 1, 1, row.length).setValues([row]);
    // return json success results
    return ContentService
          .createTextOutput(JSON.stringify({"result":"success", "row": nextRow}))
          .setMimeType(ContentService.MimeType.JSON);
  } catch(e){
    // if error return this
    return ContentService
          .createTextOutput(JSON.stringify({"result":"error", "error": e}))
          .setMimeType(ContentService.MimeType.JSON);
  } finally { //release lock
    lock.releaseLock();
  }
}
 
function setup() {
    var doc = SpreadsheetApp.getActiveSpreadsheet();
    SCRIPT_PROP.setProperty("key", doc.getId());
}

I edited Martin’s original code in the middle of handleResponse. His version looks for POST data in a slightly different format than what I generate: rather than reading the key/value parameters of the HTTP POST (e.parameter), my code reads the raw JSON body of the POST (e.postData.contents) and parses it.

Twine Game Code (see newer version)

*I’ve posted an updated version of this blog post. The code below will run into a CORS problem and won’t work. Read the new post to see the correct version.*

Rather than using an Ajax form, I wanted to pass variables that had been collected during gameplay in a Twine game. Twine is built on javascript, so I decided to replace Martin’s Ajax form with a javascript HTTP Post function embedded in Twine. Based on research on how Twine works, I decided that the best way to do this would be to write the javascript code directly into a Twine game passage. My passage, called PostData, would presumably come at or very near the end of my game after all interesting variables have been set:

Twine screen shot

I wrapped Twine’s script syntax <script></script> around the standard XMLHttpRequest object. This sends an HTTP POST to whatever URL is provided (like the URL for your Google Script Web App) with a JSON package defined in the var data line. Here’s the code (please note that you need to add the <script></script> tags yourself in Twine):

var xhr = new XMLHttpRequest();
var url = "URL for the Google App";
xhr.open("POST", url, true);
xhr.setRequestHeader("Content-type", "application/json; charset=UTF-8");
var data = JSON.stringify({"var1": harlowe.State.variables['var1'], "var2": harlowe.State.variables['var2'], "var3": harlowe.State.variables['var3']});
xhr.send(data);

However, in order to pull variables out of the Harlowe version of Twine that I was using, I also needed to add the following code by editing the Story Javascript:

Twine Javascript Screen Shot

This bit of Javascript passes all variables defined within the Twine game into an array (window.harlowe) that is accessible by Javascript code that is embedded in the game. Here’s the code in case you want to try this out:

if (!window.harlowe){
	window.harlowe = {"State": State};
}

I hope this work will be useful in studying any Twine game to see how players are moving through it. You could record any variables in the game as well as the game’s ‘history’ to see which passages each player went through. This has obvious uses for educational games in being able to provide feedback to players, but it also has implications for game design more broadly with the increased use of metrics.

Implement in your own game

In order to implement this for your own game, I would suggest following these steps:

  1. Copy the Javascript code above (starts with if (!window)) into your Twine game’s Javascript panel
  2. Copy the PostData code above and paste it into a TwinePost passage towards the end of your game
  3. Then replace the variables in the TwinePost passage so that harlowe.State.variables['var1'] becomes harlowe.State.variables['your variable name here'] for each of the variables you want to track
  4. Click this link to get a copy of my Google Spreadsheet
  5. Make sure the column headers in the spreadsheet match your variable names from the TwinePost passage
  6. In the Google Sheet, click on Tools->Script Editor and follow Martin Hawksey’s instructions for steps 2-5
  7. When you publish your Script as a Web App, it will give you a URL for the Web App. Copy this URL and paste it into the URL variable in your TwinePost passage code.
  8. You’re done. Play your game and see if everything works. If it doesn’t work, tweet at Tom Woodward. He’s good at fixing code and has nothing but free time on his hands.

I am excited about this code because it answers a question for several of our faculty members and makes Twine games more useful as formative assessments. Hawksey did an excellent job in keeping his code very generalized, and I’ve tried to preserve that, so that you can track whatever variables you want.

You could also use the HTTP POST javascript code outside of Twine in any other website or web app to pass information to your Google Sheet. Tom has blogged a couple of times about using code to send data to Google Forms and autosubmitting into a Google Spreadsheet. I think the process described above removes the need for that Google Form pass-through and moves us a step closer to Google Sheets as a NoSQL database alternative.

