This is an example of how to read a csv file from an AWS S3 bucket and use it as the data source for a D3 JavaScript visualization. The D3 visualization is an HTML document hosted on a web server.
You will use the AWS SDK for JavaScript to get the csv file from the S3 bucket, so you need an AWS access key and secret with permission to read the bucket, but I won't cover setting those up in this post.
The key point of this post is that the data returned by bucket.getObject is read into D3 using d3.csv.parse(data.Body.toString()).
Note that d3.csv.parse is the D3 version 3 API; newer versions (D3 v4 and later) use d3.csvParse instead.
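For reference, here is a minimal sketch of the two parser calls; csvString is just a placeholder for the text pulled out of the S3 response, and both calls return an array of objects keyed by the csv header row:

// D3 v3: the CSV parser hangs off d3.csv
var rowsV3 = d3.csv.parse(csvString);

// D3 v4 and later: the parser is a top-level function
var rowsV4 = d3.csvParse(csvString);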
Once implemented, whenever the webpage is refreshed it retrieves the latest csv file from the S3 bucket and the D3 visualization is updated.
<script src="https://sdk.amazonaws.com/js/aws-sdk-2.6.3.min.js"></script>
<script type="text/javascript">
    // AWS key and secret (note these should be retrieved from a server, not put as plain text into the HTML)
    AWS.config.accessKeyId = 'xxxxxxxxxxxxxxxxxxxxxxx';
    AWS.config.secretAccessKey = 'xxxxxxxxxxxxxxxxxxxxxxx';
    AWS.config.region = 'us-east-1';

    // create the S3 service object
    var bucket = new AWS.S3();

    // use the AWS SDK getObject to retrieve the csv file
    bucket.getObject({
        Bucket: 'my-S3-bucket',
        Key: 'myfile.csv'
    },
    // callback that receives the retrieved data
    function awsDataFile(error, data) {
        if (error) {
            return console.log(error);
        }

        // this is where the magic happens: d3.csv.parse reads data.Body.toString()
        var myCSVdata = d3.csv.parse(data.Body.toString());

        // loop through the rows and coerce the fields needed for the visualization to numbers
        myCSVdata.forEach(function(d) {
            d.field1 = +d.field1;
            d.field2 = +d.field2;
        });

        // now you can create the rest of the D3 visualization here,
        // for example something like https://gist.github.com/d3noob/4414436

        // my D3 visualization code here

    // this closes bucket.getObject
    });
</script>
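As the comment above says, hard-coding the access key and secret in the page is not a good idea. One alternative is to let the browser obtain temporary credentials through Amazon Cognito. The sketch below assumes you have created a Cognito identity pool whose role allows reading the bucket; the IdentityPoolId value is a placeholder you would replace with your own pool id.

// A minimal sketch, assuming a Cognito identity pool with s3:GetObject access to the bucket
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' // placeholder pool id
});

// the rest of the code is unchanged: create the S3 service object and call getObject
var bucket = new AWS.S3();

With this approach no long-lived secret ever appears in the HTML, and the temporary credentials are scoped to whatever permissions the Cognito role grants.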