Testing The Core
Growing the PHP Core
Hack Your Home With a Pi
The Workshop: Accept Testing with Codeception
PHP Puzzles: Making Some Change
PSR Pickup: PSR 12 - Extended Coding Style Standard
Security Corner: Operational Security
Education Station: Which License to Choose?
Drupal Dab: Drupal 9 Intro and Install
DDD Alley: When the New Requirement Arrives
finally{}: Tech is Taking Sides
Learn how a Grumpy Programmer approaches improving his own codebase, including all of the tools used and why.
The Complementary PHP Testing Tools Cookbook is Chris Hartjes’ way to try and provide additional tools to PHP programmers who already have experience writing tests but want to improve. He believes that by learning the skills (both technical and core) surrounding testing you will be able to write tests using almost any testing framework and almost any PHP application.
Available in Print+Digital and Digital Editions.
Order Your Copy
phpa.me/grumpy-cookbook
April 2022 Volume 21 - Issue 4
Impacting the World Through Tech - Eric Van Johnson
Growing the PHP Core - One Test at a Time - Florian Engelhardt
How to Hack your Home with a Raspberry Pi - Part 4 - Ken Marks
Which License to Choose? (Education Station) - Chris Tankersley
Operational Security (Security Corner) - Eric Mann
Accept Testing with Codeception (The Workshop) - Joe Ferguson
When the New Requirement Arrives (DDD Alley) - Edward Barnard
Drupal 9 - Introduction and Installation (Drupal Dab) - Nicola Pignatelli
Making Some Change (PHP Puzzles) - Oscar Merida
New and Noteworthy
PSR 12 Extended Coding Style Standard (PSR Pickup) - Frank Wallen
Tech is Taking Sides (finally{}) - Beth Tucker Long
Copyright © 2022—PHP Architect, LLC All Rights Reserved
Although all possible care has been taken in assuring the accuracy of the contents of this magazine, including all associated source code, listings, and figures, the publisher assumes no responsibility for the use of the information contained herein or in any associated material.
php[architect], php[a], the php[architect] logo, PHP Architect, LLC and the PHP Architect, LLC logo are trademarks of PHP Architect, LLC.
Edited with peace and love
php[architect] is published twelve times a year by:
PHP Architect, LLC 9245 Twin Trails Dr #720503 San Diego, CA 92129, USA
Subscriptions: Print, digital, and corporate subscriptions are available. Visit https://www.phparch.com/magazine to subscribe or email contact@phparch.com for more information.
Advertising To learn about advertising and receive the full prospectus, contact us at ads@phparch.com today!
Contact Information: General mailbox: contact@phparch.com Editorial: editors@phparch.com
Print ISSN 1709-7169 Digital ISSN 2375-3544
Impacting the World Through Tech
Eric Van Johnson
If you listen to my PHPUgly podcast, you might know that we have a policy of not mixing politics with our tech talk. That is a policy we have been breaking more and more often over the past couple of years.
From the previous presidential term and through the election, a glaring topic that surfaced more and more was that our industry and technology were playing a more significant role in politics, locally and worldwide. For better or worse, it has become how people get their news, shape their opinions, and set the stage for how people are perceived. Speaking about technology and how it plays into events like the Russian invasion of Ukraine is unavoidable. Tech companies and services also know they can no longer avoid how their products and offerings might be contributing to atrocities that they do not want to be a part of, and they have begun to take action. In this April edition of finally{}, Beth Tucker Long shares her thoughts in Tech is Taking Sides.

We get Part Four of Ken Marks' Raspberry Pi series, where he discusses web APIs and plotting data. This series has been such a fantastic read. If you haven't been keeping up, go back to the beginning of the year and read the previous three parts. Ken has been doing a terrific job explaining the journey of the real-world solution he created using a Raspberry Pi and PHP. Also, congratulations to our second winner of the Raspberry Pi giveaway, Miguel Gallegos.

Florian Engelhardt contributed a column this month on a topic I had wanted covered for a long time, Growing the PHP Core - One Test at a Time. Florian walks you through the PHP core codebase and how its tests work. I couldn't wait to step through this article. He steps you through cloning the PHP codebase and through the testing workflow.

This month, we introduce a new column called Drupal Dab, contributed by Nicola Pignatelli. Drupal 9 - Introduction and Installation: the title pretty much covers the topic. He walks us through how to install Drupal and some of the first things to do after you install it.

In this month's Education Station, Chris Tankersley takes us down the path of licenses with some thoughts on choosing one for our project in his article Which License to Choose? He discusses the pros and cons of several open-source licenses.

In Security Corner, Eric Mann discusses Operational Security. He touches on what happens when disaster strikes, learning from mistakes, best practices, and the ongoing quest for security.

Next to time, one of the more frustrating areas to code is money. In this month's PHP Puzzles, Making Some Change, Oscar Merida goes over the challenge of making change. He also shows some solutions to last month's challenge.

We've all heard the excuses for not having tests: tests are "confusing", "difficult", "take too long to write", or are just "complicated." There's also a saying: "any tests are better than no tests." In this month's The Workshop, Joe Ferguson goes over one of the easiest ways to get some basic tests into your project with his article Accept Testing with Codeception.

I have personally been a huge fan of Edward Barnard's new DDD Alley column. This month, he continues the series with When the New Requirement Arrives, where he talks about what to do when new requirements for a codebase are introduced and how to handle them. He touches on the SOLID principles, bloated classes, test boundaries, and more.

Frank Wallen continues his new column, PSR Pickup, where he moves on to PSR 12, the Extended Coding Style standard, and discusses what this PSR is and why you might want to use it in your projects.
Write For Us
If you would like to contribute, contact us, and one of our editors will be happy to help you hone your idea and turn it into a beautiful article for our magazine. Visit https://phpa.me/write or contact our editorial team at write@phparch.com and get started!
Stay in Touch
Don't miss out on conference, book, and special announcements. Make sure you're connected with us.
Growing the PHP Core - One Test at a Time
Florian Engelhardt
In September 2000, I started my vocational training at an internet agency with two teams: one doing JavaServer Pages and one doing PHP. I was assigned to the PHP team, and when presented with the language, I immediately knew that no one would ever use it. I was wrong. Today, my entire career is built on PHP. It’s time to give back to the community by writing tests for the PHP core itself!
Prepare Your Machine!
Before you start writing tests for PHP, let's start by running the tests that already exist. To do so, fetch the PHP source from GitHub and compile it.
$ git clone git@github.com:php/php-src.git
$ cd php-src
$ ./buildconf
$ ./configure --with-zlib
$ make -j `nproc`
I recommend creating a fork upfront because it's easier to create a pull request when you finish your test. If you do not have a compiler and build tools installed on your Linux computer, install the development-tools group on Fedora or the build-essential package on Debian. The ./configure command may exit with an error; this usually happens when build requirements are not met, so install whatever ./configure says is missing and rerun that step. Keep in mind that you need to install the development packages. In my case, the configure script stated it was missing libxml, which was, in fact, installed; it was really missing the header files in the development package (usually named with a dev or devel suffix). Note that the --with-zlib argument to ./configure is only mandatory in this case, as you will need the zlib extension (which is not built by default) for the following examples.
After your build is complete, you can find the PHP binary
in ./sapi/cli/php. Go ahead and check what you just created:
$ ./sapi/cli/php -v
PHP 8.2.0-dev (cli) (built: Dec 28 2021 19:43:57) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.0-dev, Copyright (c) Zend Technologies
Now that you have a freshly built and running PHP binary, you can finally run the tests included within the GitHub repository. Since PHP 7.4, tests may run in parallel; you can give the number of parallel jobs with the -j argument.
$ make TEST_PHP_ARGS=-j`nproc` test
============================================
TEST RESULT SUMMARY
--------------------------------------------
Exts skipped    :    46
Exts tested     :    26
--------------------------------------------
Number of tests : 17124          11774
Tests skipped   :  5350 ( 31.2%) --------
Tests warned    :     1 (  0.0%) (  0.0%)
Tests failed    :     2 (  0.0%) (  0.0%)
Expected fail   :    28 (  0.2%) (  0.2%)
Tests passed    : 11743 ( 68.6%) ( 99.7%)
--------------------------------------------
Time taken      :    40 seconds
============================================
This output looks good, and it only took 40 seconds to run 11774 tests; quite fast. Only one warning and two failures, hooray! You can also see that 5350 tests were skipped and 28 were expected to fail. You will learn why that is in the next section.
What Do Tests Look Like?
Now that we know how to execute the tests, let’s look at a
test case. PHP test files have the file ending .phpt and consist
of several sections, three of which are mandatory: the TEST,
the FILE, and the EXPECT or EXPECTF section.
--TEST--
strlen() function
--FILE--
<?php
var_dump(strlen('Hello World!'));
?>
--EXPECT--
int(12)
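If you want to run just this one example yourself, save it as a .phpt file and hand that single file to the test runner; the path below is only a hypothetical example, but the make target is the same one used for the zlib tests later in this article:

```
$ make test TESTS=test/strlen_hello_world.phpt
```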
The TEST Section
This section is a short description of what you are testing in a single line. It must be as short as possible, as the section is printed when running the tests. You can put additional details in the DESCRIPTION section.
The DESCRIPTION Section
If your test requires more than a single line to describe it adequately, you can use this section for further explanation. This section is completely ignored by the test binary.
The FILE Section
This section is the actual test case, PHP code enclosed by PHP tags. It is where you do whatever you want to test. As you have to create output to match against your expectation, you usually find var_dump all over the place.
The EXPECT Section

The output from the executed code in the FILE section must match this section exactly for the test to pass.
The EXPECTF Section

EXPECTF can be used as an alternative to the EXPECT section and allows the usage of substitution tags you may know from the printf family of functions:

--EXPECTF--
int(%d)
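Other substitution tags work the same way. For instance, a hypothetical expectation that only pins down the types of a dumped string and float could look like this (%d matches an integer, %s a string, and %f a floating-point number):

```
--EXPECTF--
string(%d) "%s"
float(%f)
```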
The EXTENSIONS Section
You should list all the PHP extensions that your test case depends on in this optional section. The PHP test runner will try to load those extensions, and if they are not available, skip the test. If you depend on multiple extensions, the list must be separated by a newline character. In our case, we will list the zlib extension.
The SKIPIF Section
This section is optional and is parsed and executed by the PHP test runner. The test case is skipped if the resulting output starts with the word skip. If it starts with xfail the test case is run, but it is expected to fail.
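A minimal sketch of a SKIPIF body for the zlib function tested later in this article (this particular check is only an illustration; the EXTENSIONS section already covers the case of a missing extension):

```
--SKIPIF--
<?php
if (!function_exists('zlib_get_coding_type')) {
    echo 'skip zlib support not available';
}
?>
```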
The XFAIL Section
XFAIL is another optional section. If present, the test case is
expected to fail; you don’t need to echo out xfail in the SKIPIF section at all if you have this. It contains a description of why this test case is expected to fail. This feature is mainly used when the test is already finished, but the implementation isn’t yet, or for upstream bugs. It usually contains a link to where a discussion about this topic can be found.
The CLEAN Section

This section exists so you can clean up after yourself, for example, removing temporary files you created during the test. Keep in mind this section's code is executed independently from the FILE section, so you have no access to variables declared over there. Also, this section runs regardless of the outcome of the test.
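For example, a CLEAN section for a test that wrote a hypothetical temporary file might look like this sketch:

```
--CLEAN--
<?php
@unlink(__DIR__ . '/my_test_output.tmp');
?>
```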
You may find more in-depth details on the file format at the PHP Quality Assurance Team Web Page[1].
What Can You Test?
Now that you know what a PHP test looks like, it is time to find something to test. Head over to Azure Pipelines[2], click on the latest scheduled run (look for the calendar icon and the string “Scheduled for”), and then click “Code Coverage” to view the code coverage report.
When I started looking for something to test, I found that the zlib_get_coding_type() function in ext/zlib/zlib.c was not covered at all (this was back in the PHP 7.1 branch, on the now-retired gcov.php.net[3]). See Figure 1. The next step was to check out what this function was supposed to do in the PHP Documentation[4]. In the documentation and the code itself, I saw that this function returns the string gzip or deflate, or the Boolean value false. The linked zlib.output_compression directive documentation[5] gave one additional bit of information: the zlib output compression feature reacts to the HTTP Accept-Encoding header sent with the client's HTTP request.
The Big Picture

In an HTTP request, a client can indicate via the Accept-Encoding header that it would understand a compressed response using the specified compression algorithms. Additionally, if the PHP directive zlib.output_compression is enabled, PHP magically compresses the output generated by your script before sending it to the client, saving bandwidth.
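As a quick illustration of that interaction (a hypothetical sketch, not code from the PHP source): with the directive enabled, a script's output is only compressed when the client advertises support for it.

```
<?php
// Hypothetical sketch: in a web SAPI with zlib.output_compression=On,
// PHP gzip-compresses this output only if the request carried an
// Accept-Encoding header listing gzip (or deflate); otherwise the
// response body is sent uncompressed.
ini_set('zlib.output_compression', 'On');
echo str_repeat('Hello World! ', 1000);
```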
1 Web Page: https://qa.php.net/phpt_details.php
2 Azure Pipelines: https://dev.azure.com/phpazuredevops/PHP/_build?definitionId=1
3 gcov.php.net: http://gcov.php.net/PHP_7_1/lcov_html/ext/zlib/zlib.c.gcov.php#543
4 PHP Documentation: https://www.php.net/zlib-get-coding-type
5 zlib.output_compression directive documentation: https://www.php.net/zlib.configuration#ini.zlib.output-compression
Figure 1.
Test Cases

For the test, this means there are four possible cases to check for:
- the absence of the Accept-Encoding header
- the Accept-Encoding being gzip
- the Accept-Encoding being deflate
- the Accept-Encoding being anything else
The last case is treated the same as the first case: The function is expected to return the Boolean value false.
Time To Write That Test!

Let's start with the first test, for the case where there is no Accept-Encoding header set, in the file test/zlib_get_coding_type.phpt.
Listing 1.

```
--TEST--
zlib_get_coding_type()
--EXTENSIONS--
zlib
--FILE--
<?php
ini_set('zlib.output_compression', 'Off');
var_dump(zlib_get_coding_type());
ini_set('zlib.output_compression', 'On');
var_dump(zlib_get_coding_type());
?>
--EXPECT--
bool(false)
bool(false)
```
You can run this single test via:
$ make test TESTS=test/zlib_get_coding_type.phpt
Sadly, this gives you a failed test. You can easily see why
this is the case by checking the test directory contents where
you find your .phpt test file and other files with various file
endings. One that is particularly interesting, in this case, is
the zlib_get_coding_type.log file:
---- EXPECTED OUTPUT
bool(false)
bool(false)
---- ACTUAL OUTPUT
bool(false)

Warning: ini_set(): Cannot change zlib.output_compression - headers already sent in test/zlib_get_coding_type.php on line 4
bool(false)
---- FAILED
Don’t let these failed tests get you down if this is your
first time learning about PHP internals. These failed tests
are learning opportunities. For example, in this case you
will learn that you cannot change the output compression
setting after your script creates any form of output. For
output compression to work, there is a handler that is called
when we change the zlib.output_compression directive. This
handler checks to see if any output has already been sent,
and if it has, sends a warning.
Let's dive deeper. Try searching for "zlib.output_compression" in the ext/zlib/zlib.c source code file. You find it in the call to the STD_PHP_INI_BOOLEAN macro, along with the pointer to the function OnUpdate_zlib_output_compression, in which you can spot the warning you received.
Thinking of the big picture of the feature, this totally makes sense. When the script creates any form of output, PHP will create and send the HTTP response headers indicating whether the response will be compressed and, if so, what algorithm was used. We cannot toggle this feature anymore, as there is no way of telling the client that part of the response is compressed and another part is not.
To work around this warning, you can change the FILE section to only create output after the calls to ini_set():
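A sketch of what that revised FILE section could look like: buffer the results in variables and only dump them after both ini_set() calls (the same pattern the article extends in Listing 2):

```
--FILE--
<?php
ini_set('zlib.output_compression', 'Off');
$off = zlib_get_coding_type();
ini_set('zlib.output_compression', 'On');
$on = zlib_get_coding_type();
var_dump($off);
var_dump($on);
?>
```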
Rerunning the test case results in your first successful test!
🎉
It’s not time to party yet; there are still other cases to cover.
The next one on the list is the case with the HTTP Accept-Encoding header set to the string gzip. As a PHP developer, you know you can access and change the HTTP headers via the superglobal $_SERVER. Let's change the FILE and EXPECT sections accordingly, as shown in Listing 2.
Listing 2.
```
--FILE--
<?php
ini_set('zlib.output_compression', 'Off');
$off = zlib_get_coding_type();
ini_set('zlib.output_compression', 'On');
$on = zlib_get_coding_type();
$_SERVER['HTTP_ACCEPT_ENCODING'] = 'gzip';
$gzip = zlib_get_coding_type();
var_dump($off);
var_dump($on);
var_dump($gzip);
?>
--EXPECT--
bool(false)
bool(false)
string(4) "gzip"
```
Now run the test via:
$ make test TESTS=test/zlib_get_coding_type.phpt
The test failed again. Looking into the zlib_get_coding_type.log file, you notice that the third call to the zlib_get_coding_type() function returned false, not the expected string gzip. It seems like PHP is not reacting to the HTTP header that you clearly set just one line before. Did we spot a bug in PHP?

The source code tells you: open the ext/zlib/zlib.c source code file and search for the variable compression_coding, as this is the variable being evaluated in the function you are testing. You should find some matches, but there is one at the beginning of the file, in the C function php_zlib_output_encoding, which looks like the only place where something is assigned to the variable in question.
Analyzing the source code, not understanding exactly what is going on, and finally asking for help on the PHP Community Chat[6] reveals the following: all of your userspace variables and PHP user-accessible autoglobals ($_GET, $_SERVER, …) are copy-on-write. Altering those from your script creates a copy for you to work on in userspace, effectively making the HTTP request headers immutable to userspace. The PHP core continues to use the original, unaltered version.

Now that you know you cannot alter or set the HTTP header from within your script to influence PHP's behavior, this might look like a sad end for your test's code coverage.
Wait, there is more! I did not tell you about another section in the PHPT file format that might come in handy now: the ENV section. This section is used to pass environment variables to the PHP process that runs your test code, meaning you can continue writing tests, but you have to create a test file per test case. Let's create a new test file for the gzip case and name it zlib_get_coding_type_gzip.phpt (Listing 3).
Listing 4.

```
--TEST--
zlib_get_coding_type() is gzip
--EXTENSIONS--
zlib
--ENV--
HTTP_ACCEPT_ENCODING=gzip
--FILE--
<?php
ini_set('zlib.output_compression', 'Off');
$off = zlib_get_coding_type();
ini_set('zlib.output_compression', 'On');
$gzip = zlib_get_coding_type();
ini_set('zlib.output_compression', 'Off');
var_dump($off);
var_dump($gzip);
?>
--EXPECT--
bool(false)
string(4) "gzip"
```
Run the test and see it fail once more. You might have expected this by now. 😝 Let's see what happened by looking into the zlib_get_coding_type_gzip.log file.
---- EXPECTED OUTPUT
bool(false)
string(4) "gzip"
---- ACTUAL OUTPUT
�1�0 ���B�P�Txꆘ8`5ؑ������s�7a�ze��B+���ϑ�,����^�
---- FAILED
Wow, this looks like some binary garbage, and yeah, it does not match the expected output.

Look at your test case. You told PHP that you accept gzip encoding, and you turned on output compression. PHP is basically just doing what we told it to do. The binary garbage you see here is gzip-encoded data. This means you succeeded in activating gzip output compression but forgot to turn it off before dumping out the two variables. Now, all that's left is to add another call to deactivate output compression again, so the final test looks similar to Listing 4.
Rerun this test, and it finally passes! 🎊🥳🎉
Now you're left writing one test case for the Accept-Encoding header being set to deflate and another test case for the Accept-Encoding header being set to an invalid string (basically something that is not gzip or deflate). From this point on, finishing the test cases is just diligence, and if you like to cheat, you can find the tests in the ext/zlib/tests/ directory named:
- zlib_get_coding_type_basic.phpt
- zlib_get_coding_type_br.phpt
- zlib_get_coding_type_deflate.phpt
- zlib_get_coding_type_gzip.phpt
Listing 3.

```
--TEST--
zlib_get_coding_type() is gzip
--EXTENSIONS--
zlib
--ENV--
HTTP_ACCEPT_ENCODING=gzip
--FILE--
<?php
ini_set('zlib.output_compression', 'Off');
$off = zlib_get_coding_type();
ini_set('zlib.output_compression', 'On');
$gzip = zlib_get_coding_type();
var_dump($off);
var_dump($gzip);
?>
--EXPECT--
bool(false)
string(4) "gzip"
```
6 PHP Community Chat: https://phpc.chat
Collect Some Evidence!
Now that you have tests for all four cases, you could hope that you covered that function with tests, or (a way better option if you ask me) you could continue and create a code coverage report! For the report to work, you need to install the lcov tool via your Linux package manager (sudo dnf install lcov on Fedora or sudo apt-get install lcov on Debian). PHP was built with gcov disabled, so we need to run configure again with gcov enabled and recompile.
$ make clean
$ CCACHE_DISABLE=1 ./configure --with-zlib --enable-gcov
$ make -j `nproc`
This compiles the PHP binary with code coverage generation enabled. lcov will be used to create an HTML code coverage report afterward. To create your code coverage report, you need to rerun the tests.
$ make test TESTS=test/zlib_get_coding_type.phpt
While running the tests, gcov generates coverage files with a .gcda file ending for every source code file. To generate the HTML code coverage report, run the following:
$ make lcov
You may find the resulting HTML code coverage report in
lcov_html/index.html.
It looks like we covered all four cases as shown in Figure 2.
What’s in It for You?
PHP becomes more stable and reliable with every test you write, as functions may not suddenly change behavior between releases. Writing tests for PHP gives you a deeper understanding of how your favorite language works internally. In fact, by reading up to this point, you may have learned that HTTP request headers are immutable to userspace, that there is such a thing as userspace, and that there are handler functions called to check what you are doing when you want to change ini settings at runtime.
Also, knowing the PHPT file format gives you other benefits: PHPUnit[7] not only supports the PHPT file format and can execute those tests, but part of PHPUnit itself is tested with PHPT test cases. This skill led me to become a PHPUnit contributor, writing PHPUnit tests in the PHPT file format.
Shout Out!
I would like to thank everyone involved in the PHP TestFest[8], especially Ben Ramsey[9] and everyone else involved in the organization of the 2017 edition. This was what made me write tests for PHP and made me not only a contributor to PHP but also a PHPUnit contributor. In the long run, this brought me my first ElePHPant 🐘!
Florian is a father of five, developer, architect, tech lead, geek and still sometimes finds time to contribute to open source. He started writing software with PHP in September 2000 and will try to convince you to use vim. @realFlowControl
7 PHPUnit: https://phpunit.de/
8 PHP TestFest: https://phptestfest.org/start/#whos-behind-this
9 Ben Ramsey: https://twitter.com/ramsey
SECRET WEAPON!
Exception, uptime, and cron monitoring, all in one place and easily installed in your web app. Deploy with confidence and be your team’s devops hero.
Are exceptions all that keep you up at night?
Honeybadger gives you full confidence in the health of your production systems.
DevOps monitoring, for developers. gasp!
Deploying web applications at scale is easier than it has ever been, but monitoring them is hard, and it’s easy to lose sight of your users. Honeybadger simplifies your production stack by combining three of the most common types of monitoring into a single, easy to use platform.
Exception Monitoring: Delight your users by proactively monitoring for and fixing errors.
Uptime Monitoring: Know when your external services go down or have other problems.
Check-In Monitoring: Know when your background jobs and services go missing or silently fail.
Start Your Free Trial Today
How to Hack your Home with a Raspberry Pi - Part 4
Ken Marks
Welcome back to another installment of How to Hack your Home with a Raspberry Pi. By the end of this article, you will have an accelerometer data plotting application running on your Raspberry Pi. You will need to start from the beginning of the series if you want to build this project yourself.
Introduction
Here's what we've covered so far in Parts 1 through 3:

- Part 1:
  - My motivation for this project
  - Components needed
  - Installing the OS on the Pi
  - Configuring the Pi
- Part 2:
  - Updating the software packages on your Pi
  - Installing and testing the LAMP stack
- Part 3:
  - Connecting the accelerometer
  - Creating the database for storing accelerometer data
  - Compiling a C/C++ program that reads accelerometer data and logs it to the database
  - Creating a Unix service to automatically start and stop logging

In this installment, we will:

- Build a web service that uses the logged accelerometer data
- Build an application that plots the accelerometer data
Power Up Your Pi and Remotely Log in
After ensuring your accelerometer is properly connected to your Raspberry Pi (see Part 3), connect the power supply to your Pi and plug it into AC mains.
SSH Into Your Pi
After a few minutes of letting the Pi power-up, bring up a terminal window on your computer. Next, SSH into your Pi with the user pi by typing the following command into the terminal:
ssh pi@raspberrypi.local
Enter your password. You should now be logged in to your Raspberry Pi.
Update the Packages
I’ll first check to see if any software packages need upgrading on the Pi using the apt package manager and run the following two commands in the terminal window:
sudo apt update
sudo apt upgrade
Building a Web Service
We ultimately want to build a web application that plots the accelerometer data in real time. So first, let’s create a simple REST service with a single endpoint that provides the following two mechanisms for reading the accelerometer data:
- Getting the newest accelerometer data and its ID
- Getting all the accelerometer data starting with the next newest row of data after the ID sent, all the way up to the newest data, along with the newest accelerometer data's ID
💡 What is REST? REST[1] stands for REpresentational State Transfer. If this acronym isn't confusing enough, we have many alternative names for it, such as RESTful service, web API, or web service. At the great risk of giving an oversimplified definition, REST is a means of transferring data (typically defined as a noun in the URI) using HTTP verbs representing the intent of what we are doing with the data. Going forward, I will refer to this as a web service. We use web services to expose an API for accessing a CRUD object (another acronym, meaning Create, Read, Update, and Delete, which is a code wrapper for accessing our database tables).
Since all we need from our web service is to retrieve data,
we will only use the HTTP verb GET. When a client makes
an HTTP GET request to our service, we’ll return a JSON[2]
payload containing the accelerometer data, making it really
flexible for any programming language to interact with this
service. For example, I could create an application entirely in
PHP using Guzzle[3] or in JavaScript using AJAX to interact
with the web service.
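For instance, once the service is running (we build it below), a PHP client could read from it with Guzzle; this is only an illustrative sketch, not part of the project code, and it assumes Guzzle was installed with Composer:

```
<?php

require 'vendor/autoload.php';

// Ask the web service for everything recorded since a known measurement ID.
$client = new \GuzzleHttp\Client(['base_uri' => 'http://raspberrypi.local/']);
$response = $client->get('accelerometerservice/', [
    'query' => ['lastMeasurementId' => 41820],
]);

$payload = json_decode((string) $response->getBody(), true);
echo $payload['accelerationData']['lastMeasurementId'] . PHP_EOL;
```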
Accelerometer Data Web Service API

Table 1.

| HTTP Verb | CRUD Action |
|---|---|
| POST | Create record |
| GET | Read record(s) |
| PUT | Update record |
| DELETE | Delete record |
We won’t require any authentication to keep it simple, and we won’t worry about versioning our service either. As mentioned, there is just one endpoint, and we will create it at the following location to interact with our web service:
http://raspberrypi.local/accelerometerservice/
The first time we call our service, we want to get the latest accelerometer data, so we will send an HTTP GET request without any parameters, as shown in Table 2 row 1. We want the JSON data we get back to contain the latest accelerometer reading, its timestamp, and its ID. See Listing 1. Then we can plot this data point in the plotting application we will create, using the timestamp. Remember, our accelerometer logging service is saving data every 100 milliseconds. Therefore, we will use the lastMeasurementId as an input parameter the next time we request data from the web service, to get all the data points since the above request.
Table 2.

| HTTP Verb | Endpoint | Parameter |
|---|---|---|
| GET | http://raspberrypi.local/accelerometerservice/ | [none] |
| GET | http://raspberrypi.local/accelerometerservice/ | lastMeasurementId |

2 JSON: https://www.json.org/json-en.html
3 Guzzle: https://docs.guzzlephp.org/en/stable/
Listing 1.

```
{
    "accelerationData": {
        "accelerationMeasurements": [
            {
                "axis": {
                    "X": "0.199219",
                    "Y": "-0.182617",
                    "Z": "0.943359"
                },
                "dateTime": "2022-03-02 15:00:46.631"
            }
        ],
        "lastMeasurementId": "171695"
    }
}
```
Listing 2.

```
{
    "accelerationData": {
        "accelerationMeasurements": [
            {
                "axis": {
                    "X": "0.203125",
                    "Y": "-0.177734",
                    "Z": "0.947266"
                },
                "dateTime": "2022-03-02 15:00:46.746"
            },
            ...
            {
                "axis": {
                    "X": "0.200195",
                    "Y": "-0.185547",
                    "Z": "0.948242"
                },
                "dateTime": "2022-03-02 15:00:56.625"
            }
        ],
        "lastMeasurementId": "171795"
    }
}
```
For our next call to our service, we will send an HTTP GET request (Table 2 row 2) with the lastMeasurementId as a parameter to get all the data since our last request. We want the JSON data we get back to contain all the accelerometer data since the last request and the newest reading's ID, as shown in Listing 2.
If this request is exactly 10 seconds since the last request,
we should have 100 measurements in the JSON payload.
Creating the Accelerometer Data Web Service
I have created a PHP web service containing the endpoint
script and a CRUD wrapper class to retrieve the acceleration
data from the database and convert it into a JSON payload to
be sent to the requesting client.
Download the Accelerometer Service Scripts

In the terminal window, within your Downloads folder, download the accelerometerservice.zip[4] file to your Pi using wget:

cd ~/Downloads/
wget https://www.phparch.com/downloads/accelerometerservice.zip

In the terminal window, unzip the accelerometerservice.zip file and move the accelerometerservice folder to the /var/www/html/ folder:

unzip accelerometerservice.zip
sudo mv accelerometerservice/ /var/www/html/
The web service contains the scripts shown in Table 3, and our web service endpoint is now located at http://raspberrypi.local/accelerometerservice/.
AccelerometerData.php
First, let’s take a look at AccelerometerData.php in Listing 3.
This class is for fetching data from a database table into an object. You can see that the properties in this class match the fields in the accelerometer_data table in the AccelerometerData database. Each object of this class represents a row from a query into the table. I've added magic getters and setters and a __toString() method for completeness, but we'll just be using the magic getter.
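As a quick illustration of how those magic methods behave (with a made-up value, not a real database row):

```
<?php
require_once('AccelerometerData.php');

$row = new AccelerometerData();
$row->axis_x = '0.199219';   // handled by __set()
echo $row->axis_x;           // handled by __get(); prints 0.199219
echo $row;                   // __toString() renders all fields (the rest are unset here)
```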
AccelerometerDataManager.php

Listing 4 is our CRUD wrapper, the AccelerometerDataManager class. This class has two public methods:
4 accelerometerservice.zip: https://www.phparch.com/downloads/accelerometerservice.zip
Listing 3.

```
class AccelerometerData
{
    private $id;
    private $created;
    private $axis_x;
    private $axis_y;
    private $axis_z;

    // Get/Set
    public function __get($ivar)
    {
        return $this->$ivar;
    }

    public function __set($ivar, $value)
    {
        $this->$ivar = $value;
    }

    // Serialize
    public function __toString()
    {
        $format = "\nId: %s\nCreated: %s\nX: %s\nY: %s\nZ: %s\n";
        return sprintf(
            $format,
            $this->__get('id'),
            $this->__get('created'),
            $this->__get('axis_x'),
            $this->__get('axis_y'),
            $this->__get('axis_z')
        );
    }
}
```
Table 3.

| File Name | Description |
|---|---|
| AccelerometerData.php | Class wrapper for accelerometer_data table data |
| AccelerometerDataManager.php | CRUD manager wrapper for retrieving data from the accelerometer_data table and packaging it up into a JSON payload |
| index.php | Web service entry point |
- readLatest() retrieves the newest accelerometer data, including its ID.
Listing 4.

```
require_once('AccelerometerData.php');

class AccelerometerDataManager
{
    const HOST = "localhost";
    const DB = "AccelerometerData";
    const USER = "accelerometer";
    const PW = "accelerometer";

    public function readLatest()
    { ... }

    public function readFromIdToLatest($id)
    { ... }

    private function getJsonEncodedDataFromAccelDataObjects(
        $accelDataObjects
    )
    { ... }
}
```
- readFromIdToLatest($id) retrieves all the accelerometer data since the ID (passed in as a parameter) and the newest ID. If the ID is not found in the table (due to it being deleted because it's older than 60 seconds), it returns all 60 rows of the accelerometer data.

The class also has one private utility method:

- getJsonEncodedDataFromAccelDataObjects($accelDataObjects), which we will use to create a nested associative array from the accelerometer data and convert it into a JSON payload.
Let's take a look at the getJsonEncodedDataFromAccelDataObjects() method (Listing 5) first.

The parameter, $accelDataObjects, is an array of one or more AccelerometerData objects fetched from the accelerometer_data table. The foreach() loop builds a nested associative array (called $measurements) containing the accelerometer data from the array of AccelerometerData objects.

After processing all of the accelerometer data, we continue to build the nested associative array by creating an array called $accelerationData with a key of accelerationData. The key contains another array with two key/value pairs: accelerationMeasurements, containing the $measurements array as a value, and lastMeasurementId, containing the value of the last ID processed from the AccelerometerData object array (which will be the newest reading).
Listing 5.
1. private function
getJsonEncodedDataFromAccelDataObjects($accelDataObjects)
2. {
3. $measurements = array();
4.
5. foreach ($accelDataObjects as $accel)
6. {
7. $measurements[] =
8. array(
9. 'axis' =>
10. array(
11. 'X' => $accel->axis_x,
12. 'Y' => $accel->axis_y,
13. 'Z' => $accel->axis_z
14. ),
15. 'dateTime' => $accel->created
16. );
17. }
18.
19. $accelerationData =
20. array(
21. 'accelerationData' =>
22. array(
23. 'accelerationMeasurements' => $measurements,
24. 'lastMeasurementId' => $accel->id
25. )
26. );
27.
28. return json_encode($accelerationData, JSON_PRETTY_PRINT);
29. }
Finally, we call json_encode() to convert the completed
associative array $accelerationData into a pretty printed
JSON payload and return it to the calling method.
Let's take a look at the readLatest() method in Listing 6, which will return the newest accelerometer data reading along with its primary key ID.

Using PDO, we create a connection to the AccelerometerData database, and we will handle exceptions. Next, we query the newest row of data in the accelerometer_data table and fetch the results into an array of AccelerometerData objects; there should only be one object in the array. If something goes wrong with the query, we will return 0 to the calling function. We pass this AccelerometerData object array to the getJsonEncodedDataFromAccelDataObjects() method and return the JSON payload to the calling function.
Finally, let's look at the readFromIdToLatest($id) method in Listing 7, which will return all the accelerometer data readings since the reading specified by the $id parameter.

This method takes an ID corresponding to a row in the accelerometer_data table. Using PDO, we create a connection to the AccelerometerData database, and we will handle exceptions. Next, we query all the newest rows of data in the accelerometer_data table greater than the $id passed in and fetch the results into an array of AccelerometerData objects. Notice we are preparing and binding our incoming $id to :id to prevent SQL injection. If something goes wrong with the query, we will call the readLatest() method to return the newest accelerometer data reading. Finally, we pass this AccelerometerData object array to the getJsonEncodedDataFromAccelDataObjects() method and return the JSON payload to the calling function.
index.php

index.php (Listing 8) serves as the entry point to our web service. First, we grab the HTTP request method from $_SERVER['REQUEST_METHOD']. Next, we instantiate an AccelerometerDataManager object. Then we switch on $httpVerb. We will only process HTTP GET requests and throw an "Unsupported HTTP request" exception for anything else.
Listing 7.
1. public function readFromIdToLatest($id)
2. {
3. $retVal = NULL;
4.
5. $db = new PDO("mysql:host=" . self::HOST . ";dbname="
6. . self::DB, self::USER, self::PW);
7. $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
8.
9. // Get the 'created' date from the given id
10. $sql = "SELECT id, created, axis_x, axis_y, axis_z "
11. . "FROM accelerometer_data WHERE id > :id "
12. . "ORDER BY id";
13.
14. try
15. {
16. $query = $db->prepare($sql);
17. $query->bindParam(":id", $id, PDO::PARAM_INT);
18. $query->execute();
19.
20. $results = $query->fetchAll(
21. PDO::FETCH_CLASS,
22. "AccelerometerData"
23. );
24.
25. if (is_array($results) && count($results) >= 1)
26. {
27. return
28. $this->getJsonEncodedDataFromAccelDataObjects($results);
29. }
30. else
31. {
32. // Just get the latest reading
33. $retVal = $this->readLatest();
34. }
35. }
36. catch(Exception $ex)
37. {
38. echo "{$ex->getMessage()}<br/>\n";
39. }
40.
41. return $retVal;
42. }
Listing 6.
1. public function readLatest()
2. {
3. $retVal = NULL;
4.
5. $db = new PDO("mysql:host=" . self::HOST . ";dbname="
6. . self::DB, self::USER, self::PW);
7. $db->setAttribute(PDO::ATTR_ERRMODE,
8. PDO::ERRMODE_EXCEPTION);
9.
10. // Get the latest measurement
11. $sql = "SELECT id, created, axis_x, axis_y, axis_z "
12. . "FROM accelerometer_data "
13. . "ORDER BY id DESC LIMIT 1";
14.
15. try
16. {
17. $query = $db->prepare($sql);
18. $query->execute();
19.
20. $results = $query->fetchAll(PDO::FETCH_CLASS,
21. "AccelerometerData");
22.
23. if (is_array($results) && count($results) == 1)
24. {
25. return $this->
26. getJsonEncodedDataFromAccelDataObjects($results);
27. }
28. else
29. {
30. $retVal = 0;
31. $retVal = json_encode($retVal, JSON_PRETTY_PRINT);
32. }
33. }
34. catch(Exception $ex)
35. {
36. echo "{$ex->getMessage()}<br/>\n";
37. }
38.
39. return $retVal;
40. }
Once we've determined the request method is an HTTP GET, we will send an HTTP header with the content type of application/json, so the client's browser knows to expect a JSON payload. If a query parameter named lastMeasurementId is passed in, we will call the readFromIdToLatest() method, passing in the query parameter contained in $_GET['lastMeasurementId']. Otherwise, we will call the readLatest() method. Both of these methods return a JSON payload, which is echoed out to the client.
Testing Our Web Service

We have several ways we can test our web service to see if we're getting the expected JSON payload.
Listing 8.

```
require_once("AccelerometerDataManager.php");

$httpVerb = $_SERVER['REQUEST_METHOD']; // POST, GET, PUT, DELETE, ...

$accelerometerDataManager = new AccelerometerDataManager();

switch ($httpVerb)
{
    case "GET":
        // Read
        header("Content-Type: application/json");
        if (isset($_GET['lastMeasurementId'])) // Read (by lastMeasurementId)
        {
            echo $accelerometerDataManager->readFromIdToLatest($_GET['lastMeasurementId']);
        }
        else
        {
            echo $accelerometerDataManager->readLatest();
        }
        break;
    default:
        throw new Exception("Unsupported HTTP request");
        break;
}
```
Figure 1.
Bring up a browser tab on your local computer and navigate to http://raspberrypi.local/accelerometerservice/. You should see a JSON payload that returns the newest accelerometer data and looks something like Figure 1.

Notice the lastMeasurementId and its value. We will use this as a query parameter in our next request to query all the accelerometer data since this request.

Assuming less than 60 seconds have transpired, bring up another tab in your browser and navigate to http://raspberrypi.local/accelerometerservice/?lastMeasurementId=41820. You should see a JSON payload that returns all the newest accelerometer data since the lastMeasurementId and looks something like Figures 2 and 3.
Now, I'll show you how to use curl to test your web service if you have it installed as a command-line utility. Open up a terminal window on your local computer and type the following command:

curl --location --request GET 'http://raspberrypi.local/accelerometerservice'
💡 NOTE: When we request all the data since this last request, keep in mind that the accelerometerdatalogging.service is only storing 60 seconds worth of data. So if you request all the data since the lastMeasurementId and more than 60 seconds have passed, you will retrieve the latest 60 seconds worth of accelerometer measurement data.
In Figure 4, you should see the JSON payload returned. If
you like, you can repeat this command with the query parameter added onto the end of the URL to get all the newest
measurement data since the lastMeasurementId.
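For example, reusing the lastMeasurementId value from the earlier browser request (your ID will differ):

```
curl --location --request GET 'http://raspberrypi.local/accelerometerservice/?lastMeasurementId=41820'
```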
Building a Plotting Application

Now that we know our accelerometer data web service is working, we will build a client-side web application using JavaScript that makes an AJAX call to the web service to get the latest accelerometer entries every second. We will use Smoothie Charts[5] (a JavaScript charting library) to plot the accelerometer data in real time.
Download the Accelerometer Plotting Application Scripts

In the terminal window, within your Downloads folder, download the accelerometer.zip[6] file to your Pi using wget:

cd ~/Downloads/
wget https://www.phparch.com/downloads/accelerometer.zip

In the terminal window, unzip the accelerometer.zip file and move the accelerometer folder to the /var/www/html/ folder:

unzip accelerometer.zip
sudo mv accelerometer/ /var/www/html/

The plotting application contains the scripts shown in Table 4.
Our accelerometer plotting application is now located at:
http://raspberrypi.local/accelerometer/.
I'm not going to go through the smoothie.js library, but if you want to know more, a ten-minute tutorial[7] shows you how easy it is to use. And a configuration tool called Smoothie Charts Builder[8] allows you to set various options and generates the HTML and JavaScript you need for these options.

5 Smoothie Charts: http://smoothiecharts.org
6 accelerometer.zip: https://www.phparch.com/downloads/accelerometer.zip
7 ten-minute tutorial: http://smoothiecharts.org/tutorial.html
8 Smoothie Charts Builder: http://smoothiecharts.org/builder/
Table 4.

| File Name | Description |
|---|---|
| smoothie.js | Smoothie Charts library script |
| index.html | Main page that displays the canvas containing the real-time plot of accelerometer data |
| js_get_server_name.php | PHP script for returning the name of the server to an AJAX call for proper URL construction |
| acceleration.js | Contains functions for getting the latest accelerometer data from the web service and plotting it to a canvas using Smoothie Charts |
Figure 2. JSON payload part 1
However, I want to walk through the rest of the scripts with
you so you understand how our plotting application works
and integrates with our web service.
NOTE: Smoothie Charts has many options not covered in the Smoothie Charts Builder. You will need to peruse the well-documented smoothie.js library to find them.
index.html

I will again use the Bootstrap[9] CSS framework for our Accelerometer Data page. See Listing 9.
We're starting out with a typical boilerplate Bootstrap web page. The first thing I want you to note is that we need a canvas. Our canvas will have a fixed height of 100 pixels and a width that is responsive.
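The canvas markup itself boils down to something like the following sketch; the element ID is the one acceleration.js streams to, while the exact classes and styling in index.html may differ:

```
<canvas id="acceleration_data_plot_canvas" style="width: 100%; height: 100px"></canvas>
```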
Next, we need to include jQuery because we will be making some AJAX calls to use our web service. Then we need to include our smoothie.js library and our acceleration.js script containing our functions. And lastly, we need to call our plotAccelerometerData() function.
js_get_server_name.php

This script returns the name of the server as JSON to an AJAX call from the acceleration.js script; it is used to construct the URL to the web service endpoint.
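A minimal sketch of such a script, assuming the name comes from $_SERVER['SERVER_NAME'] and that it is returned under a server_name key (which is what acceleration.js reads):

```
<?php
// js_get_server_name.php: return this host's name as JSON so the
// front end can build the web service URL.
header('Content-Type: application/json');
echo json_encode(array('server_name' => $_SERVER['SERVER_NAME']));
```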
Listing 9.

```
<!-- index.html (excerpt): Bootstrap boilerplate for the "Accelerometer Data"
     page, with the plot canvas and the script includes -->
...
    integrity="sha384-ka7Sk0Gln4gmtz2MlQnikT1wXgYsOg+OMhuP+IlRH9sENBO0LRn5q+8nbTov4+1p"
    crossorigin="anonymous"></script>
...
    integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4="
    crossorigin="anonymous"></script>
...
    plotAccelerometerData();
...
```
9 Bootstrap: http://getbootstrap.com
Listing 10.

```
let Server =
{
    url: ""
};

let AccelerometerData =
{
    lastId: 0
};
```
Listing 11.
function getLatestAccelerometerData( axis_x, axis_y, axis_z) { … } function plotAccelerometerData() { … }
Listing 12.

```
function plotAccelerometerData () {
    $.getJSON('js_get_server_name.php',
        function (data) {
            Server.url = 'http://'
                + data.server_name
                + '/accelerometerservice/';
        });
    ...
}
```
Next, we need to specify the plot lines for each axis of accelerometer data. Smoothie Charts requires us to create a TimeSeries object for each plot line:

```
function plotAccelerometerData() {
    ...
    let x = new TimeSeries();
    let y = new TimeSeries();
    let z = new TimeSeries();
    ...
}
```
Then we will call setInterval(), which will call the getLatestAccelerometerData() function every second, passing in the time series objects for each axis. getLatestAccelerometerData() will retrieve one second's worth of accelerometer data and append it to the time series objects. See Listing 13.
In Listing 14, we instantiate our SmoothieChart object. We will set our chart to be responsive, so when we resize the width of our canvas, the chart will adjust accordingly. We also set the grid parameters and labels. You can experiment with some of these settings using the Smoothie Charts Builder[10] or by looking at the options in the function header documentation for the smoothie constructor in smoothie.js.
acceleration.js

This script has functions for getting the latest accelerometer data from the web service and plotting it to a canvas using Smoothie Charts.

First, we will create a couple of properties. The first property is for holding the URL to our web service endpoint used in AJAX calls to the service. The second property will hold the ID of the last read accelerometer data. See Listing 10.
In Listing 11, we have two functions: getLatestAccelerometerData() and plotAccelerometerData().
Let's look at the plotAccelerometerData() function first, as it will show us how to set up and use the Smoothie Charts library. plotAccelerometerData() is the entry point that is called from index.html.

First, we need to construct a proper URL for our web service endpoint (Listing 12), so we will use jQuery.getJSON() to get our JSON-encoded server name using an AJAX HTTP GET request and set the Server.url property to point to our web service endpoint, http://raspberrypi.local/accelerometerservice/.
Listing 13.
```
function plotAccelerometerData () {
...
setInterval(function () {
getLatestAccelerometerData(x, y, z);
}, 1000);
...
}
```
Listing 14.
```
1. function plotAccelerometerData () {
2. ...
3. let smoothie = new SmoothieChart(
4. {
5. responsive: true,
6. grid:
7. {
8. strokeStyle: 'rgb(125, 0, 0)',
9. fillStyle: 'rgb(60, 0, 0)',
10. lineWidth: 1,
11. millisPerLine: 250,
12. verticalSections: 6
13. },
14. labels: {fillStyle: 'rgb(255, 255, 0)'}
15. });
16. ...
17. }
```
Then we will add the three TimeSeries objects to our SmoothieChart object and set the strokeStyle, fillStyle, and lineWidth parameters for each axis by calling the SmoothieChart.addTimeSeries() method:

```
function plotAccelerometerData()
{
    ...
    smoothie.addTimeSeries(x, {
        strokeStyle: 'rgb(0, 255, 0)',
        fillStyle: 'rgba(0, 255, 0, 0.3)',
        lineWidth: 3 });
    smoothie.addTimeSeries(y, {
        strokeStyle: 'rgb(255, 0, 255)',
        fillStyle: 'rgba(255, 0, 255, 0.3)',
        lineWidth: 3 });
    smoothie.addTimeSeries(z, {
        strokeStyle: 'rgb(0, 0, 255)',
        fillStyle: 'rgba(0, 0, 255, 0.3)',
        lineWidth: 3 });
    ...
}
```
We'll set the X-axis green, the Y-axis magenta, and the Z-axis blue.

Finally, we will call the SmoothieChart.streamTo() method to stream the accelerometer data to the canvas we identified with the element ID acceleration_data_plot_canvas, with a delay of one second to avoid jitter in the display:

```
function plotAccelerometerData()
{
    ...
    smoothie.streamTo(document.getElementById(
        "acceleration_data_plot_canvas"), 1000);
    ...
}
```

10 Smoothie Charts Builder: http://smoothiecharts.org/builder/
Listing 15 shows the getLatestAccelerometerData() function, which is responsible for getting the accelerometer data from the web service and adding it to the Smoothie Charts TimeSeries objects.

getLatestAccelerometerData() will be called once per second. The first time this function is called, we enter the if condition because we initialized AccelerometerData.lastId to 0. In this condition, we want to get the newest accelerometer data. All subsequent calls to this function will enter the else condition because we will be setting AccelerometerData.lastId to the newest accelerometer data ID.
Comic: turnoff.us by Daniel Stori, shared with permission from the artist.
Listing 15.
```
function getLatestAccelerometerData(axis_x, axis_y, axis_z)
{
if (AccelerometerData.lastId === 0)
{ ... }
else // Send GET using last measured Id
{ ... }
}
```
Listing 16.
```
1. function getLatestAccelerometerData (axis_x, axis_y, axis_z) {
2. if (AccelerometerData.lastId === 0) {
3. $.getJSON(Server.url,
4. {},
5. function (data, status) {
6. if (status === 'success') {
7. AccelerometerData.lastId = data.accelerationData.lastMeasurementId;
8.
9. // In order to be compatible with the Safari browser, we cannot just pass in
10. // the datetime string into the Date() constructor. Instead, we must pass in
11. // each datetime component as a parameter to the Date() constructor
12. let dateComps = data.accelerationData.accelerationMeasurements[0].dateTime.split(/[^0-9]/);
13. let dateTime = new Date(dateComps[0], dateComps[1] - 1, dateComps[2],
14. dateComps[3], dateComps[4], dateComps[5], dateComps[6]).getTime();
15.
16. let xValue = data.accelerationData.accelerationMeasurements[0].axis.X;
17. let yValue = data.accelerationData.accelerationMeasurements[0].axis.Y;
18. let zValue = data.accelerationData.accelerationMeasurements[0].axis.Z;
19.
20. axis_x.append(dateTime, xValue);
21. axis_y.append(dateTime, yValue);
22. axis_z.append(dateTime, zValue);
23. }
24. });
25. } else // Send GET using last measured Id
26. { ... }
27. }
```
Let’s look at the if condition logic shown in Listing 16.
First, we request the latest accelerometer data from our web service endpoint by calling jQuery.getJSON() to get JSON data using an AJAX HTTP GET request. This request to http://raspberrypi.local/accelerometerservice/ has no query parameters. Upon success, the JSON payload will be contained in the data parameter of the anonymous function. At this point, we'll assign the AccelerometerData.lastId property to the value of the last measurement ID:
```
AccelerometerData.lastId =
data.accelerationData.lastMeasurementId;
```
Since we are only reading one set of measurements, all the data we need is contained in data.accelerationData.accelerationMeasurements[0].
The TimeSeries objects passed in require a Date object specifying the datetime of the data we want to append to the series. The Date object can be instantiated with the datetime string; however, in order to be compatible with multiple browsers, it is better to instantiate a Date object with the datetime string split into individual components as parameters to the Date constructor:

```
// In order to be compatible with the Safari browser, we cannot just pass in
// the datetime string into the Date() constructor. Instead, we must pass in
// each datetime component as a parameter to the Date() constructor
let dateComps = data.accelerationData.accelerationMeasurements[0].dateTime.split(/[^0-9]/);
let dateTime = new Date(dateComps[0], dateComps[1] - 1, dateComps[2],
    dateComps[3], dateComps[4], dateComps[5], dateComps[6]).getTime();
```

Next, we will save the measurement data to local variables:

```
let xValue =
    data.accelerationData.accelerationMeasurements[0].axis.X;
let yValue =
    data.accelerationData.accelerationMeasurements[0].axis.Y;
let zValue =
    data.accelerationData.accelerationMeasurements[0].axis.Z;
```

Finally, we will append the dateTime Date object and the measurements to each TimeSeries object:

```
axis_x.append(dateTime, xValue);
axis_y.append(dateTime, yValue);
axis_z.append(dateTime, zValue);
```
Listing 17.
```
1. function getLatestAccelerometerData (axis_x, axis_y, axis_z) {
2. if (AccelerometerData.lastId === 0)
3. { ... }
4. else // Send GET using last measured Id
5. {
6. $.getJSON(Server.url,
7. {lastMeasurementId: AccelerometerData.lastId},
8. function (data, status) {
9. if (status === 'success') {
10. AccelerometerData.lastId = data.accelerationData.lastMeasurementId;
11.
12. data.accelerationData.accelerationMeasurements.forEach(function (accelerationMeasurement) {
13. // In order to be compatible with the Safari browser, we cannot just pass in
14. // the datetime string into the Date() constructor. Instead, we must pass in
15. // each datetime component as a parameter to the Date() constructor
16. let dateComps = accelerationMeasurement.dateTime.split(/[^0-9]/);
17. let dateTime = new Date(dateComps[0], dateComps[1] - 1, dateComps[2],
18. dateComps[3], dateComps[4], dateComps[5], dateComps[6]).getTime();
19.
20. let xValue = accelerationMeasurement.axis.X;
21. let yValue = accelerationMeasurement.axis.Y;
22. let zValue = accelerationMeasurement.axis.Z;
23.
24. axis_x.append(dateTime, xValue);
25. axis_y.append(dateTime, yValue);
26. axis_z.append(dateTime, zValue);
27. });
28. }
29. });
30. }
31. }
```
Now in Listing 17 we will look at the else condition logic.
First, we request all the newest accelerometer data since the last measurement’s ID from our web service endpoint by calling
`jQuery.getJSON()` to get JSON data using an AJAX HTTP GET request, including the AccelerometerData.lastId property
set as the query parameter lastMeasurementId:
```
http://raspberrypi.local/accelerometerservice/?lastMeasurementId=41877
```
Upon success, the JSON payload will be contained in the data parameter of the anonymous function. At this point, we’ll
assign the AccelerometerData.lastId property to the value of the lastMeasurementId:
```
AccelerometerData.lastId = data.accelerationData.lastMeasurementId;
```
This block of code is essentially the same as that contained in the `if` condition; however, we are reading (approximately) 10
measurements instead of just one. So we will iterate over data.accelerationData.accelerationMeasurements using the forEach()
method:
```
data.accelerationData.accelerationMeasurements.forEach(function(accelerationMeasurement)
{
...
});
```
-----
Figure 5. Accelerometer Data Plot
The rest of the code block (Listing 18) is essentially the
same as the `if` block, except we are running it for each
measurement (set as `accelerationMeasurement`) in the
`data.accelerationData.accelerationMeasurements` array.
###### Testing Our Plotting Application
Bring up a browser tab on your local computer and navigate
to `http://raspberrypi.local/accelerometer/`. Gently shake
your accelerometer sensor and change its orientation—you
should see a real-time plot of all three axes of your accelerometer data displayed on the web page similar to Figure 5.
###### Conclusion
You now have a web application on your Raspberry Pi that
displays a real-time plot of the activity your accelerometer is
reading. If you have a clothes dryer, go ahead and put your Pi
on top of it the next time you dry a load of clothes, and take a look
at the activity! See Figure 6.
In the next installment, we will:
- Install a State Machine that detects when our accelerometer stops shaking, which we can use to tell when our
clothes dryer is done
- Send a text message to ourselves using sendmail
when our drying cycle is complete
Figure 6. Data Plot of Dryer Running
_Ken Marks has been working in his dream_
_job as a Programming Instructor at Madison_
_College in Madison, Wisconsin, teaching_
_PHP web development using MySQL_
_since 2012. Prior to teaching, Ken worked_
_as a software engineer for more than 20_
_years, mainly developing medical device_
_software. Ken is actively involved in the_
_PHP community, speaking and teaching at_
_[conferences. @FlibertiGiblets](https://twitter.com/FlibertiGiblets)_
-----
## Which License to Choose?
###### Chris Tankersley
Licensing for software, whether it is open source or not, is an integral part of releasing software. The commercialization of software has made it necessary for developers to be explicit in how users or other developers consume their software. Unfortunately, the topic of licensing is not as straightforward as many developers would like it to be.
As a quick refresher, licenses are the legal terms that an
end-user must abide by to use the software legally. This
includes installing, using, or integrating the software in
other pieces of software. Common things the license covers
are personal versus commercial use, how and where software may be installed, and whether or not the end-user may
modify the software.
There is much more to licensing than just “is this software
open source or not?” If you are releasing software, you can
choose from hundreds of available licenses. Why are there so
many, and what are the differences between them? What you
choose affects how users can interact with your software.
As developers, we need to be keenly aware of the licenses
that we use in our own software. Licenses differ in the liberties granted to a developer. And different licenses can have
very serious legal repercussions for a company. Have you
ever worked for a company with a strict “No GPL[1] Software”
policy? There is a very real reason for that.
While software can be used anywhere in the world, my experience and view for this article will be United States-focused. I
am not a lawyer. If you are unsure how something may work
in your country, I would consult with a lawyer familiar with
your local copyright laws. Consider the following descriptions a view on the intentions of the various licenses rather
than hard legal advice.
###### No License
Have you ever come across a repository that you would
love to use, but there is no license file? Congratulations, you
have come across a library or project where you legally have
no usage rights. You should avoid this project at all costs.
A core component of any license is the rights and rules
around using a particular piece of software. If there is no
license, the copyright holder holds all the cards. They have
not granted you any usage rights, and they have not granted
you access to the source code, even if you are looking at the
code on GitHub or another service. They may come after you
legally for using their intellectual property without permission.
While I would love to be altruistic about this, this is the
world we live in, thanks to modern copyright laws. No license
means no rights, even basic usage rights. The project is just
_1_ _[GPL: https://www.gnu.org/licenses/gpl-3.0.en.html](https://www.gnu.org/licenses/gpl-3.0.en.html)_
taking advantage of lax policy policing by code repository
providers.
###### Public Domain
The Public Domain[2] is not so much a license in-and-of-itself, but rather a declaration of the abandonment of copyright
on a work. In the United States, this declaration is known as
“dedicating.” A work dedicated to the Public Domain is usually
identified with some accompanying text of “This work is
dedicated to the public domain.” Once a work is dedicated to
the public domain, the work is no longer owned by any entity
and free for anyone to use.
The Public Domain is incredibly problematic from a legal
standpoint. The first problem is that, like licensing, dedication to the Public Domain must be declared. Many countries,
including the United States, assign copyright automatically to
the author. There is no legal authority one is required to go
through to establish copyright (though there are legal steps
one can do to help protect and declare the copyright).
Nothing is automatically entered into the Public Domain
except under a few conditions. A work enters the Public
Domain by being dedicated by the original author as
mentioned above, or if copyright expires. Copyright can
expire either naturally or if the author does not file for an
extension. Once copyright is removed, a work enters the
Public Domain.
While this sounds straightforward, copyright is a complicated thing. The rules differ for people versus corporate
entities. Copyright is transferrable, and if this is not meticulously documented, copyright ownership can get cloudy.
Authors of works may or may not have done so “for hire”
(how most software is developed). Corporate acquisitions
and breakups can make it hard to know who owns what
copyright. Shifting rules of the length of copyright can make
it hard to know if a work is due to automatically age out to
Public Domain status.
Assuming copyright ownership is actually known, there
is no legal definition of what “Public Domain” is. There are
rules for copyright as part of the Berne Convention of 1988,
but countries can still enact their own rules. For example, The
US does not require dedication for works before 1988 and
_2_ _The Public Domain:_
_[https://fairuse.stanford.edu/overview/public-domain/welcome](https://fairuse.stanford.edu/overview/public-domain/welcome)_
-----
declared anything before 1927 as Public Domain (with exceptions). Copyright is also handled country-by-country so
that a work could be Public Domain in one country but not
another. This presents a second hurdle for anyone wanting to
use Public Domain software.
Even scarier? The copyright holder could potentially
rescind Public Domain status[3] with the way US copyright
works. Now, all of a sudden, a library your company uses is
no longer Public Domain. What do you do? Since there was
no license agreement, it is unknown, both theoretically and
legally, what would happen if such a thing were to occur.
At the end of the day, much like with no license, Public
Domain licensed software is a minefield and should be
avoided.
###### Public Domain-like License
What happens when you want the legal requirements of a
license but the freedom of Public Domain? You get a variety
of Public Domain-like licenses that effectively say, “I do not
care what you do with this software, and I am putting that
in writing as the copyright holder.” There are a handful of
licenses available that fall under this category, but not all of
these licenses are considered “Open Source.”
The Creative Commons is probably the most popular suite
of licenses of this type. The Creative
Commons is designed as a variety of licenses for authors to
use to better control how their works are used. However, it
is not recommended they be used for software[4]. Creative
Commons 0[5] is the closest to a Public Domain declaration
you can get while still providing legal text.
Another popular license is the Unlicense[6], which includes
anti-copyright language to make it more applicable around
the world. It includes an official copyright waiver as well as
a “no warranty” statement, which is an important declaration missing from many of the Public Domain-like licenses.
In fact, the Unlicense is considered open-source compatible,
unlike other licenses in this category.
While there are a handful of Public Domain-like licenses,
many are legally dubious. One of the more famous examples
is the “Don’t be a Dick[7]” license. On the surface, it looks like
something akin to the Unlicense, but the main focus of the
license is you are granted rights if you “aren’t a dick.” Unfortunately, there is no legal definition for what this means, and
the license even mentions that the few examples given are not
exhaustive. At any point, the copyright holder can just decide
a user has broken the license based on whatever behavior the
author deems is “dickish.”
_3_ _[rescind Public Domain status: https://www.techdirt.com/?p=83508](https://www.techdirt.com/?p=83508)_
_4_ _[software: https://phpa.me/creativecommons-can-i](https://phpa.me/creativecommons-can-i)_
_5_ _[Creative Commons 0: https://creativecommons.org/?p=12354](https://creativecommons.org/?p=12354)_
_6_ _[Unlicense: https://en.wikipedia.org/wiki/Unlicense](https://en.wikipedia.org/wiki/Unlicense)_
_7_ _[Don’t be a Dick: https://dbad-license.org](https://dbad-license.org)_
It is my opinion that short of a Public Domain-like license
approved by the Open Source Initiative, you should avoid
these types of licenses.
###### Proprietary License
A proprietary license is essentially going to be any license
that is unique to an individual piece of software. For example,
when you install a piece of software like Microsoft Office, you
agree to what they call an “End User License Agreement.” This
agreement contains information that you might expect—it
details how you may install the software, how usage is granted,
and all kinds of other legal stuff.
While a company may copy-and-paste much of the text
between their EULAs, each software’s license governs just
that piece of software. The Windows EULA does not cover
Microsoft Office, and the Windows 10 EULA does not cover
Windows 11 or previous versions of the software. The license
is unique to that specific piece of software.
_Are EULAs and Software Licenses different? It depends on_
_who you ask, as they can cover many of the same topics._
_For what topics we are covering, it is pretty safe to equate_
_an End User License Agreement to various other Software_
_Licenses. One main difference is EULAs tend to not differ-_
_entiate between source code and compiled code, where_
_open-source licenses may be explicit on those two topics._
###### Source Sharing
While many proprietary licenses restrict gaining access to
the source code of a particular piece of software, that is not
a defining factor of a proprietary license. Some software may
come with an option called Source Sharing, where a software
provider allows a customer access to the source code.
I have come across this mostly in enterprise software
where the software provider sells software that solves many
common problems for their customers, but customers may
have very unique workflows or requirements. I used to work
in the insurance industry, and all of our back-office software
came with a source-sharing agreement. Each release of the
software included a full copy of the source code we would
patch with our custom changes and compile on our systems.
We would modify the software to work exactly how we
needed it and work with the vendor and other customers
to get our patches merged into the mainline code by the
vendor. Customers were allowed to share their patches with
the system among themselves. It was very much like a limited
open-source ecosystem.
Where source sharing differs from Open Source licensing
is that the original vendor still controls the source. While we
were granted access to the software, we could only share it
with other customers. We were still required to pay a hefty
licensing fee to get access to the software. Most damning of
all, we found out after doing a license audit that the changes
we made had an automatic copyright transfer to the vendor.
-----
What that meant was that the original vendor could, at any
time, take our patches and sell them as their own. They could
even take our patches and lock them behind an even more
expensive license. While we had access to the source code,
the proprietary license dictated that we did not own any of
the code, even the code we wrote.
###### Open Source Licensing
Many people do not realize that despite open-source software arguably being the original way software was distributed,
the term “Open Source Software” was not codified until 1998.
This official definition is called “The Open Source Definition[8]”
and was published by the Open Source Initiative as a copy
of the Debian Free Software Guidelines[9]. These rules specify
what makes a piece of software Open Source.
The Open Source Initiative[10] is a collective that helps
govern and guide open-source software. This
includes maintaining the “Open Source Definition” as well as
providing various resources for open source projects. One of
their most important projects is a list of open-source licenses
that they consider compatible with the idea of Open Source.
For a piece of software to be considered open source, it
must meet the following guidelines[11] from Wikipedia:
1. **Free redistribution: The license shall not restrict**
any party from selling or giving away the software as
a component of an aggregate software distribution
containing programs from several different sources.
The license shall not require a royalty or other fee for
such sale.
2. **Source code: The program must include source code,**
and must allow distribution in source code as well
as compiled form. Where some form of a product
is not distributed with source code, there must be a
well-publicized means of obtaining the source code
for no more than a reasonable reproduction cost,
preferably downloading via the Internet without
charge. The source code must be the preferred form
in which a programmer would modify the program.
Deliberately obfuscated source code is not allowed.
Intermediate forms such as the output of a preprocessor or translator are not allowed.
3. **Derived works: The license must allow modifica-**
tions and derived works, and must allow them to be
distributed under the same terms as the license of the
original software.
4. **Integrity of the author’s source code: The license**
may restrict source-code from being distributed
in modified form only if the license allows the
_8 The Open Source Definition:_
_[https://en.wikipedia.org/wiki/The_Open_Source_Definition](https://en.wikipedia.org/wiki/The_Open_Source_Definition)_
_9_ _[Debian Free Software Guidelines: https://w.wiki/4z9Q](https://w.wiki/4z9Q)_
_[10 Open Source Initiative: https://opensource.org](https://opensource.org)_
_[11 guidelines: https://w.wiki/4z9P](https://w.wiki/4z9P)_
distribution of “patch files” with the source code for
the purpose of modifying the program at build time.
The license must explicitly permit distribution of
software built from modified source code. The license
may require derived works to carry a different name
or version number from the original software.
5. **No discrimination against persons or groups: The**
license must not discriminate against any person or
group of persons.
6. **No discrimination against fields of endeavor: The**
license must not restrict anyone from making use of
the program in a specific field of endeavor. For example, it may not restrict the program from being used
in a business, or from being used for genetic research.
7. **Distribution of license: The rights attached to the**
program must apply to all to whom the program is
redistributed without the need for execution of an
additional license by those parties.
8. **License must not be specific to a product: The**
rights attached to the program must not depend
on the program’s being part of a particular software
distribution. If the program is extracted from that
distribution and used or distributed within the terms
of the program’s license, all parties to whom the
program is redistributed should have the same rights
as those that are granted in conjunction with the
original software distribution.
9. **License must not restrict other software: The license**
must not place restrictions on other software that
is distributed along with the licensed software. For
example, the license must not insist that all other
programs distributed on the same medium must be
open-source software.
10. **License must be technology-neutral: No provision**
of the license may be predicated on any individual
technology or style of interface.
###### Smelly Open Source
Much like Public Domain-like licensing, many licenses
look like open-source licenses but actually restrict what the
user can do. The JSON License[12] is a perfect example with its
famous line, “The Software shall be used for Good, not Evil.”
What is legally Good, and what is legally Evil? Even outside
of the legal definitions, an open-source license should not,
and cannot, restrict the user in such arbitrary ways. A truly
open source software license does not restrict what the user
can do nor force the user to do specific things. Rule #6 of
the Open Source Definition expressly invalidates anything
licensed under licenses like the JSON license.
_[12 The JSON License: https://www.json.org/license.html](https://www.json.org/license.html)_
-----
###### Permissive Licenses
Permissive licenses are licenses that are not copyleft licenses
(more about them in a moment) and allow proprietary derivative works. In many ways, this mirrors how software was
originally handled—a developer puts software out into the
world and allows anyone else to use it, even if that code gets
locked up in proprietary software. The general idea is that the
software being available makes the developer’s life easier.
A few of the most common permissive licenses are the
MIT[13], BSD[14], and Apache 2.0 licenses[15]. The MIT and BSD
licenses closely resemble each other, though there are a
variety of BSD licenses that have come out through the years.
This general format is what has inspired many of the “smelly”
open source licenses and shorter licenses.
You may see the BSD license also called the Original BSD
License, and anywhere from the 0 Clause BSD License to a 4
Clause BSD license. Over the years, various rules surrounding
licensed software have changed. For example, the 2 Clause
BSD license drops a non-endorsement requirement that the
3 Clause BSD license includes. The 3 Clause BSD dropped
an advertising requirement that the 4 Clause BSD license
imposed. These days the 2 or 3 Clause BSD license is typically
used.
Personally, I see permissive licenses working best in library
or component code or places where the code is clearly
intended to be used by other code. For example, I release my
dragonmantank/cron-expression[16] library under the MIT
license because it is meant to be used with someone else’s
code. I am more interested in solving a problem for a developer rather than making sure that the developer releases any
changes back into the wild.
The fact that permissive licenses do allow for proprietary
usage is a major downside to this type of license. One of
the most famous examples of this was that Windows 2000
contained BSD licensed code[17] as part of the networking tools
and stack. People were shocked at this, but Microsoft was well
within their rights to use the code as long as they followed the
license. And they did. If you are OK with allowing your code
to be used this way, permissive licenses are a good selection
for your code.
###### Copyleft Licenses
This brings us to Copyleft licenses. Copyleft licenses differ
from permissive licenses in that they require derivative software to be licensed under the license of the software that was
being integrated. As mentioned above, it is perfectly acceptable for permissive-licensed code to be used in software that
does not share that same license (ala BSD code being used
_[13 MIT: https://opensource.org/licenses/MIT](https://opensource.org/licenses/MIT)_
_[14 BSD: https://opensource.org/licenses/BSD-3-Clause](https://opensource.org/licenses/BSD-3-Clause)_
_[15 Apache 2.0 licenses: https://opensource.org/licenses/Apache-2.0](https://opensource.org/licenses/Apache-2.0)_
_16 dragonmantank/cron-expression:_
_[https://github.com/dragonmantank/cron-expression](https://github.com/dragonmantank/cron-expression)_
_[17 BSD licensed code: https://phpa.me/everything2-bsd-windows](https://phpa.me/everything2-bsd-windows)_
in proprietary code). On the other hand, copyleft software
makes it a requirement that the code stay under the same
license. Copyleft licenses tend to cater toward software
freedom more than developer freedom.
The GPL[18] is usually the go-to example of a copyleft license.
The GPL itself was born out of the frustration that proprietary software was causing to developers and how software
was increasingly stripping developers of the rights they used
to have. As part of this, the requirement that GPL-derived
software must also be GPL licensed was a conscious decision. This decision forced developers that altered the software
to distribute those changes when someone asked. In fact, a
derivative license called the Affero GPL[19] (AGPL) even goes
so far as to say that anyone that simply accesses the code can
ask for the source code changes.
I find that copyleft licenses, especially the GPL itself, work
best for full applications. Since applications tend to solve
much larger problems and generally involve a huge amount of
developer hours, it makes much more sense to keep bad actors
from taking the open-source software, renaming it,
and slapping a proprietary label on it. Imagine if the Linux kernel
were released under the MIT license. Well, we know what
would happen.
Two popular operating systems are based on BSD code:
1. The macOS base operating system called Darwin[20]
2. The Orbis OS[21] that powers the Playstation 4 and
Playstation 5
While Darwin started out very open, Apple increasingly
slowed down upstream patches to the OS. While Apple is very
upfront about its use of BSD software, very few realize that
the Playstation is running a BSD-powered operating system.
Sure, both Sony and Apple are legally following what the BSD
license tells them to, but there is a vast amount of work that
neither company is required to release back to the ecosystem.
###### So What Do You Do?
When you go to release software, think about your goal
for users. Ask yourself this question: “Should the software
be proprietary or open-source?” While I wholeheartedly
support open source and believe it is the way software should
be distributed, I use an iPad and iPhone, and I use Windows
for a lot of video gaming. I do a lot of open source work, but
the vast majority of my life has been spent making proprietary software.
If you release software as open-source, do it the proper way.
Use a license approved by the Open Source Initiative, and
make sure you follow the Open Source Definition for your
_[18 GPL: https://opensource.org/licenses/GPL-3.0](https://opensource.org/licenses/GPL-3.0)_
_[19 Affero GPL: https://opensource.org/licenses/AGPL-3.0](https://opensource.org/licenses/AGPL-3.0)_
_[20 Darwin: https://w.wiki/4wKH](https://w.wiki/4wKH)_
_[21 Orbis OS: https://www.extremetech.com/?p=159476](https://www.extremetech.com/?p=159476)_
-----
software. It is not much work, and you will find a license that
fits your software’s goals.
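If your project is distributed as a Composer package, declaring that choice is a single field. As a minimal sketch (the package name and the MIT choice here are only examples, not a recommendation), composer.json accepts an SPDX license identifier:

```
{
    "name": "acme/example-library",
    "description": "A hypothetical package used only to illustrate the license field.",
    "license": "MIT",
    "require": {
        "php": ">=8.0"
    }
}
```

Packagist and downstream tooling read this field, which is what makes dependency license audits like the one described below possible.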
If you are just a developer, keep in mind what software
you use and make sure you follow the license. Pay particular
attention to what the dependencies of your dependencies
require. Comcast has a license checker[22] that you can use to
scan your composer.lock file to help suss out this information.
Understand what it means to consume software under the
different licenses.
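Composer’s own `composer licenses` command will report this information per package. As a rough sketch of the same idea in plain PHP (this is not the Comcast tool, just an illustration), you can read composer.lock directly:

```
<?php
// A minimal sketch: list the declared license(s) for every package
// recorded in composer.lock (both runtime and dev dependencies).
$lock = json_decode(file_get_contents('composer.lock'), true);

foreach (['packages', 'packages-dev'] as $section) {
    foreach ($lock[$section] ?? [] as $package) {
        $licenses = implode(', ', $package['license'] ?? ['unknown']);
        printf("%-45s %s\n", $package['name'], $licenses);
    }
}
```

Even a crude listing like this makes it obvious when a copyleft dependency has crept into an otherwise permissively licensed tree.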
I hope all this information helps clear up a lot of the
misconceptions and unknown pitfalls of licensing. I would
love to live in a world where we just share code and do not
have to worry about the legalities of software design, but this
is the world we live in. All I can say is help spread open-source
software, and use it responsibly.
_[22 license checker: https://github.com/Comcast/php-legal-licenses](https://github.com/Comcast/php-legal-licenses)_
_Chris Tankersley is a husband, father,_
_author, speaker, podcast host, and PHP_
_developer. Chris has worked with many_
_different frameworks and languages_
_throughout his twelve years of programming_
_but spends most of his day working in PHP_
_and Python. He is the author of Docker for_
_Developers and works with companies and_
_developers for integrating containers into_
_[their workflows. @dragonmantank](https://twitter.com/dragonmantank)_
###### Docker For Developers is designed for developers looking at Docker as a replacement for development environments like virtualization, or devops people who want to see how to take an existing application and integrate Docker into their workflow.
This revised and expanded edition includes:
• Creating custom images
• Working with Docker Compose and Docker Machine
• Managing logs
• 12-factor applications
##### Order Your Copy
###### http://phpa.me/docker-devs
-----
## Operational Security
###### Eric Mann
It is remarkably easy to grow complacent in the digital world, but a lapse in security best practices inevitably leads to a lapse in security itself.
There was a lot of confusion in the early days of the cloud.
Our websites started on either dedicated servers or poorly-compartmentalized shared hosts. Over time, virtual servers
began to take over and better segmented unrelated workloads.
Then tools like Amazon’s AWS introduced the concept of a
“managed cloud” of services that promised to make this kind
of management easier.
For most workloads, it did. For those of us still learning
how the cloud differed from a full server, it just made things
confusing.
My first project was to migrate a particular website and
API from a full server into the cloud. Rather than AWS, we
selected an early version of Microsoft Azure. Doing so
simplified our deployment as we didn’t need to worry about
licensing, .Net versions, or manually updating the GAC[1] to
make custom libraries available to the platform.
This simplicity came at a cost.
Our application stack deployed in moments and was both
highly available and remarkably stable. It was fast, efficient,
and meant an end to 5 am wake-up calls because of literal
fires in the facility hosting our old, physical server. The biggest
downside—no email support.
On a dedicated server, I could trigger outgoing email
messages directly. Early versions of Azure did not support
SMTP directly, nor was managed transactional email a
supported service at the time. My solution: open port 25 on a
server in the rack down the hall and allow our Azure-hosted
web service to relay messages through it.
It worked wonderfully!
###### Disaster strikes
The next morning, all of our support queue metrics were
down. No one was getting messages. Customers were calling
in, irate to have not received invoices, documentation, or even
basic responses to questions. Digging in, I discovered that our
corporate IP address—and thus our entire mail server—had
been globally blacklisted.
I’d opened port 25 not just to Azure but to the entire world,
and that world was gleefully using my open relay to spam the
Internet.
As quickly as I’d opened things earlier, I locked that server
down. Instead of routing messages through a server, I dumped
them all to a file that I could pull manually. Then I spent the
_1_ _[the GAC: https://phpa.me/microsoft-gac](https://phpa.me/microsoft-gac)_
rest of the week pleading with blacklist maintainers to reinstate our IP’s reputation so we could keep working.
In the end, Microsoft did release a managed transactional
mail system, and I was able to fully integrate our application.
This empowered our team to keep working while protecting
us from accidentally exposing our internal mail system to
would-be abusers elsewhere in the world.
I believed I could use our mail server in this way for a few
days while I worked out an alternative solution. This mistaken
belief was my undoing. A later review of system logs showed
the abuse started literally minutes after I’d opened the server
to incoming network connections.
My naive sense of “we’ll be fine for a while” stood no chance
against reality.
###### Learning from mistakes
Making mistakes is natural. It’s how we learn, grow, and
develop both personally and professionally. As disastrous as
my email mistake turned out to be, it was one of the most
formative lessons of my early career. I learned never to take
_any_ security stance for granted, lest my complacency open
the door to attackers.
Since then, I’ve seen countless developers fall prey to the
same kind of naivety.
One developer was having trouble with the internal
network mappings on a server for which they were responsible. Rather than ask for help, thus airing their inability to
solve a problem, they elected to take the nuclear route. They
opened their server to traffic from any IP address in hopes
they could connect via SSH and solve the problem. The
further mistake was opening all ports to all IP addresses.
One angry call later from an ISP demanding to know why the
server was participating in a DDoS attack, and we had
the server locked down again. We conducted a rather embarrassing post-mortem of the incident, and the developer, now
understanding the ramifications of their actions, learned the
lesson and moved on.
Another developer had trouble understanding how his
network kept getting breached. His email gateway, messaging
system, and firewall all showed multiple indications of
compromise, but he couldn’t figure out why. We’d completely
wipe and reprovision systems, only to see alerts of malicious
activity later that day. It wasn’t until we got together on a
screen share that the issue was apparent.
As he logged in, we all saw his password—Passw0rd.
-----
Another engineer pointed out how insecure that password
really was. “But I have lower and upper case and a number.
Wait, I need special characters, don’t I?” He went to change
his password, and sure enough, his current password was
rated as “weak.” He entered Passw0rd! instead and, greeted by
the system’s “strong” rating, assumed everything was golden.
This breach turned out to be a great opportunity to teach
another engineer how to evaluate password strength[2], and he
eventually changed to an _actual_ strong password. Another
round of system refreshes, and he hasn’t seen any indication
of compromise in ages.
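To see why the “strong” rating misled him, here is a minimal sketch (the tiny blocklist is a stand-in, not a real dictionary): a naive character-class entropy estimate scores Passw0rd! well even though it sits on every attacker’s wordlist:

```
<?php
// A naive "character class" entropy estimate, similar in spirit to the
// checker that rated Passw0rd! as strong.
function naiveEntropyBits(string $password): float
{
    $pool = 0;
    if (preg_match('/[a-z]/', $password)) { $pool += 26; }
    if (preg_match('/[A-Z]/', $password)) { $pool += 26; }
    if (preg_match('/[0-9]/', $password)) { $pool += 10; }
    if (preg_match('/[^a-zA-Z0-9]/', $password)) { $pool += 33; }

    return strlen($password) * log($pool, 2);
}

// A tiny stand-in blocklist; real checks use breach corpora and dictionaries.
$commonPasswords = ['password', 'password1', 'passw0rd', 'passw0rd!'];

$candidate = 'Passw0rd!';
printf("Naive estimate for %s: %.1f bits\n", $candidate, naiveEntropyBits($candidate));

if (in_array(strtolower($candidate), $commonPasswords, true)) {
    echo "...but it is on a common-password list, so treat it as weak.\n";
}
```

Character classes alone tell you nothing about whether a password is guessable; that is why dictionary and breach-list checks matter.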
###### Best practices and the ongoing quest for security
No developer can ever stop learning and say, “I know everything there is to know about software.” Our industry is always
growing, developing, and moving forward. To stop learning is
to let the industry itself pass you by.
To be successful means exercising a practice of continual
learning. You absolutely must learn new technologies, new
languages, new frameworks. You must understand the best
practices in your space today and keep your eyes open to
catch their eventual evolution in the future. Adaptability is
the name of the game and is a hard requirement for ongoing
operations.
At least any operations that have a hope of any modicum
of success.
_2_ _[password strength: https://phpa.me/password-strength](https://phpa.me/password-strength)_
###### Operational Security
Security in tech is no different from any other discipline.
Things deemed secure today might be breached tomorrow as
the industry learns, evolves, and adapts. There is never a state
of “done” when it comes to security, so don’t fool yourself into
thinking security can be ignored. The tech world moves fast—
you need to move faster to stay ahead.
Learn what it means to secure your network:
- Implement a strong firewall and don’t open unnecessary
ports to the world
- Focus on refreshing your understanding of entropy and
password strength
- Use a password manager to eliminate human memory
as the weak link in your authentication scheme
- Implement multi-factor authentication[3].
Security is the responsibility of everyone involved in
operations. As best practices continue to shift us towards a
world where every developer is involved in operations, we’re
moving quickly into a space where we’re _all_ responsible for
operational security as well.
_Eric is a seasoned web developer experi-_
_enced with multiple languages and platforms._
_He’s been working with PHP for more than_
_a decade and focuses his time on helping_
_developers get started and learn new skills_
_with their tech of choice. You can reach out_
_[to him directly via Twitter: @EricMann](https://twitter.com/EricMann)_
_3_ _[authentication: https://phpa.me/multi-factor-authentication](https://phpa.me/multi-factor-authentication)_
###### Secure your applications against vulnerabilities exploited by attackers.
Security is an ongoing process not something to add right before your app launches. In this book, you’ll learn how to write secure PHP applications from first principles. Why wait until your site is attacked or your data is breached? Prevent your exposure by being aware of the ways a malicious user might hijack your web site or API.
##### Order Your Copy
###### http://phpa.me/security-principles
-----
## Accept Testing with Codeception
###### Joe Ferguson
Acceptance testing is my favorite tool to reach for when working with legacy applications that may have low test quality or no tests at all. Because acceptance testing approaches the application from outside of the source code, we’re able to greatly increase test coverage without having to touch the application’s code itself. Larger teams can use acceptance tests to prove new features behave as expected. For the single developer that knows they should be writing tests but still doesn’t for whatever reason: acceptance testing can help them jump-start their application’s test suite.
###### What Is Acceptance Testing?
One of the most common test scenarios might be, “Can a
user log in to our application and land on a specific URL we
expect?” This scenario covers _many_ methods in our appli-
cation, giving us more test coverage at the expense of our
test failures being more difficult to debug because so many
methods are involved. If we find ourselves building a new
feature, we can leverage acceptance testing to prove our
feature works as expected, and we can also test our failure
scenarios to prove our validation logic and form processing
are what we expect.
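To make that scenario concrete, here is a rough sketch of how it might read as a Codeception acceptance test (a “Cest” class); the field names, credentials, and URLs below are hypothetical placeholders rather than any particular application’s real markup:

```
<?php
// tests/Acceptance/LoginCest.php — a sketch of the "can a user log in
// and land on the URL we expect?" scenario. Field names, credentials,
// and URLs below are hypothetical placeholders.
class LoginCest
{
    public function logInAndLandOnTheDashboard(AcceptanceTester $I)
    {
        $I->amOnPage('/login');
        $I->fillField('username', 'admin');         // hypothetical field name/value
        $I->fillField('password', 'super-secret');  // hypothetical
        $I->click('Login');
        $I->seeInCurrentUrl('/dashboard');          // hypothetical landing URL
        $I->see('Dashboard');
    }
}
```

With a browser driver running, something like `vendor/bin/codecept run Acceptance` executes the suite (the exact suite name depends on your configuration).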
Acceptance testing is a method of verifying our application behaves exactly as expected, often utilizing a web
browser. We will write test scenarios that will be acted out
by a user interacting with our application, such as logging
in or performing a specific task. Acceptance tests are slower
than unit tests because they rely on a web browser. What we
lose in the speed of running acceptance tests, we gain in
confidence that our application works as designed with a
real web browser. For small teams, acceptance testing can be
your first step into a formal quality assurance (QA) process.
Whereas unit tests focus on testing isolated methods
and feature tests focus on testing specific methods working
together, acceptance testing exercises our application just as
real users would by clicking on links, filling out forms, and
processing data. We can create users easily, set permissions,
and then verify those permissions are enforced. This level of
detailed step-by-step testing provides incredible confidence
the application behaves as expected. Acceptance testing is
not a replacement for unit tests. Even in situations of legacy
applications where acceptance tests are the only tests, we
still write unit tests for new functionality.
How much of our tests should be unit or acceptance? Do
we need 100% coverage? It depends. Test coverage is about
confidence in your code. Does 100% unit test coverage give
you confidence? What gives me confidence is knowing my
applications behave as I expect them to. Acceptance testing
also allows me to turn user stories into test scenarios and
easily translate business requirements into acceptance tests.
When our tests exercise the application in the exact same
way real users will, the result is confidence in our test suite.
Acceptance testing isn’t the solution for fast test feedback
and should not replace unit tests. Acceptance tests should
augment your existing test suites and add a layer of confidence that your users can interact with your application
exactly as you’ve designed and tested.
Why Codeception[1] and not something like Behat[2] or
Kahlan[3]? Behat and other Behavior Driven Development
(BDD) style testing frameworks are designed to test the
application’s behaviors. Read about Kahlan in the September
2018[4] issue. With Codeception’s acceptance test suite, we can
run scenarios with real users in real browsers. Codeception
has been around the PHP ecosystem for 10 years or more,
and the project has matured into a robust solution for fullstack testing. Codeception supports functional, unit, and
API test suites out of the box, making it a perfect choice for
new applications and legacy applications that may not have
a test suite at all.
###### Getting Real
How we wire up this real web browser with Codeception
is via Selenium[5]. Selenium is a suite of tools for automating
web browsers, a very wide and open-ended topic. We will
skip most of the configuration and complexity that often
comes with Selenium by utilizing selenium-standalone[6]
from NPM.
_1_ _[Codeception: https://codeception.com](https://codeception.com)_
_2_ _[Behat: https://docs.behat.org/en/latest/](https://docs.behat.org/en/latest/)_
_3_ _[Kahlan: https://github.com/kahlan/kahlan](https://github.com/kahlan/kahlan)_
_4_ _September 2018:_
_[https://www.phparch.com/magazine/2018/09/magniphpicent-7-3/](https://www.phparch.com/magazine/2018/09/magniphpicent-7-3/)_
_5_ _[Selenium: https://www.selenium.dev/downloads/](https://www.selenium.dev/downloads/)_
_6_ _selenium-standalone:_
_[https://www.npmjs.com/package/selenium-standalone](https://www.npmjs.com/package/selenium-standalone)_
-----
Figure 1.
There are multiple ways you can install selenium-standalone:
1. as a global NPM package—`npm install selenium-standalone -g`
2. save the package to your project—`npm install selenium-standalone --save-dev`
3. run via docker—`docker run -it -p 4444:4444 webdriverio/selenium-standalone`

If you have installed the NPM package the next step is
to run `sudo selenium-standalone install` to download the
Chromedriver and Selenium packages. By default, we’ll
get the Selenium, Chrome, Firefox, and Chromiumedge
drivers installed for us to use. To start the service, we’ll use
`selenium-standalone start`, which will listen on port 4444 by
default. See Figure 1.
Since we’re using real browsers to test our application, I felt
it was fitting to pull a _real_ PHP project: Snipe-IT. Snipe-IT
is an open-source IT asset/license management system as
shown in Figure 2. Think of large companies with thousands
of laptops, desktops, monitors, Windows licenses, and more
deployed to their workforce. Snipe-IT is a tool they can use to
help manage all of that headache. The project was so successful
snipeyhead[7] founded a company[8] to provide a hosted version
and offer support to customers. Over the years, I’ve been able
to contribute to the project, and it’s an incredible example of
a modern Laravel based application. I’m currently using PHP
8.0 and working off the develop branch due to a new major
version release (6.0!) coming in the near future. My development environment is Ubuntu 20.04 LTS; however, these
commands should translate to macOS directly.
I have my Snipe-IT running at http://snipe-it.test[9], and
once upon a time, the project featured Codeception. But if
you’re just getting started with Codeception, you can use
`composer require "codeception/codeception" --dev` to install
the package. Make sure you review the fantastic documentation[10] for all the configuration options.
_7_ _[snipeyhead: https://twitter.com/snipeyhead](https://twitter.com/snipeyhead)_
_[8 company: https://snipeitapp.com/company](https://snipeitapp.com/company)_
_9_ _[http://snipe-it.test: http://snipe-it.test](http://snipe-it.test)_
_[10 documentation: https://codeception.com/docs](https://codeception.com/docs)_
Figure 2.
-----
Because Codeception has previously been used, we’ll
review the configuration from the root of the project and
verify we’re ready to run the Acceptance test suite found in
the tests/Acceptance folder of the project. The root configuration defines where the Codeception test suites will be located
along with support and data directories, and the extension configuration can be leveraged to run only failed tests.
Listing 1 shows these configurations, which can be found in
`codeception.yml`.
Listing 1.
```
1. paths:
2.     tests: tests
3.     output: tests/_output
4.     data: tests/_data
5.     support: tests/_support
6.     envs: tests/_envs
7. actor_suffix: Tester
8. extensions:
9.     enabled:
10.         - Codeception\Extension\RunFailed
```
Listing 2.
```
1. actor: AcceptanceTester
2. modules:
3.     enabled:
4.         - WebDriver:
5.             url: ...
```
-----
## PSR 12 Extended Coding Style Standard
###### Frank Wallen
The closing ?> tag MUST be omitted from files containing only PHP.
One reason for this is similar to the BOM in a file. When a
script is imported using require or include and there happens
to be a space after ?>, it could trigger output in the including
file before it has properly emitted its headers, resulting in
the dreaded Headers already sent error. It might also add
unwanted whitespace during output. In the browser, that is
less likely to create a problem as the space is invisible and
doesn’t impact the index like a non-breaking space (&nbsp;). But if
a download file is generated (like a TAB file), then that could
easily introduce unexpected behavior in the consumer of that
file.
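A contrived example (the file names are purely illustrative) shows how quickly this bites:

```
<?php
// index.php — a minimal sketch (file names are illustrative). Assume
// config.php ends with a PHP closing tag followed by one stray space.
// That space is emitted as output the moment the file is required, so
// the header() call below triggers the warning described above:
// "Cannot modify header information - headers already sent".

require __DIR__ . '/config.php';

header('Content-Type: text/plain; charset=utf-8'); // too late: output already started
echo 'report contents...';
```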
Code MUST use an indent of 4 spaces for each indent level, and MUST NOT use tabs for indenting.
This rule is a classic long-running debate among developers. Luckily, an IDE will usually allow you to set the number of spaces representing a tab, so you can just strike the TAB key easily, and the IDE inserts spaces. Mixing spaces with tabs can be handled elegantly by an IDE, leaving the user unaffected as the code looks properly formatted. But version control software, like Git, handles whitespace differently. A
space followed by a TAB in an IDE appears as if it were only a
single TAB, but version control would see the difference and consider it a change. While not impacting the functionality of the code, it can present as cognitive friction when another developer is reviewing the code. Figure 3 shows TAB characters used instead of spaces and a space added before the TAB. There is no visual difference in the IDE and no real issue.
Figure 3.
The screenshot in Figure 4 is taken from a diff on GitHub. Note Line 10 now shows a space added before the tab and the line is considered changed.
Figure 4.
Again, this isn’t terrible either, but it highlights the potential confusion of mixing spaces and tabs. While you may not see it in the IDE, the diff tool noticed.
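You also do not have to police any of this by eye. Assuming PHP_CodeSniffer is acceptable tooling for your team, its bundled PSR12 standard flags tab indentation and related violations, and phpcbf can fix much of it automatically:

```
composer require --dev squizlabs/php_codesniffer
vendor/bin/phpcs --standard=PSR12 src/
vendor/bin/phpcbf --standard=PSR12 src/
```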
###### Conclusion
I have only highlighted the rules from PSR-12 that I believe can impact the developer’s experience when it comes to tooling and collaboration. There are many more rules in PSR-12[1] that I recommend reviewing. Discuss them with your team to decide which ones to follow, and write up your own style guide for new members. If it’s in the team style guide, it’s established, and development can continue. This up-front work eliminates wasting time and effort debating whether a rule from PSR-12 should be implemented. Admittedly, styling PSRs are not as impactful as other PSRs, but they are significant in terms of collaboration and should be considered just as important.
Frank Wallen is a PHP developer and tabletop gaming geek (really a gamer geek in general). He is also the proud father of two amazing young men and grandfather to two beautiful children who light up his life. He lives in Southern California and hopes to one day have a cat again. @frank_wallen
###### Related Reading
- _PSRs - Improving the Developer Experience_ by Frank Wallen, March 2022. https://phpa.me/wallen-mar-2022
- _Mentoring and Teaching PHP_ by Ken Marks, July 2021. https://phpa.me/teaching-php-2021
- _Working with PHP Streams_ by Chris Tankersley, January 2021. http://phpa.me/education-jan-21

_1_ _PSR-12: https://www.php-fig.org/psr/psr-12/_
## Tech is Taking Sides
###### Beth Tucker Long
Throughout history, industries have stayed relatively neutral during wartime. Global companies, especially, may offer marketing-focused messages of hope and concern but keep their heads down and their tones neutral when faced with actually taking a stand against one side of a conflict. Per usual, though, the tech industry is happy to disrupt the status quo—not just taking a clear stand but putting their money and their talent where their mouth is.
Global industries have long been accused of profiting off of war by selling to all sides, especially those in manufacturing and raw materials. Tech companies around the world, though, are taking a defined stand and even putting aside opportunities to profit as they take sides in the current conflict between Ukraine and Russia. SpaceX has donated Starlink Internet communications systems to keep people in Ukraine connected to the internet. AT&T, Verizon, T-Mobile, and Vodafone have all waived fees for services involving Ukraine—some waiving roaming charges for customers in Ukraine, some waiving international call fees for calls to and from Ukraine. Apple has halted all sales through its Russian online store, shut down Apple Pay access in Russia, and has disabled traffic and incident reports within Ukraine in Apple Maps. Google, likewise, has disabled live traffic reports for Ukraine in Google Maps, blocked Russian state-sponsored media channels on YouTube, and paused ad sales in Russia. Microsoft has stopped sales to Russia, and even Netflix has gotten involved, suspending their services in Russia.

Even companies that are trying to just carry on with business-as-usual are having trouble staying uninvolved. Despite not taking a public stand in the matter, Coinbase, Patreon, and many other fundraising, financial, and social media platforms have found themselves in the spotlight as their platforms are being used heavily to support various organizations involved in the Russia-Ukraine conflict. It is no longer easy to keep your head down and stay neutral.

Outside of work, the tech community is also getting involved. Over 300,000 programmers have volunteered to become an “IT Army” on behalf of Ukraine, taking down key Russian websites and online services through coordinated DDoS attacks, reporting disinformation on social media platforms, and creating digital marketing campaigns to bring awareness to the conflict. Anonymous has claimed credit for hacking Russia’s television networks to broadcast footage from Ukraine and for accessing the corporate systems of Rosatom, Russia’s state corporation for nuclear energy, after they claimed ownership of a Ukrainian nuclear power plant.

Regardless of the outcome of this conflict, it is no longer isolated and localized. People around the world are getting
involved. Strangers are fighting together while sitting in their living rooms halfway around the world from each other. One thing is abundantly clear—tech has drastically changed the landscape of war.
Beth Tucker Long is a developer and owner at Treeline Design, a web development company, and runs Exploricon, a gaming convention, along with her husband, Chris. She leads the Madison Web Design & Development and Full Stack Madison user groups. You can find her on her blog (http://www.alittleofboth.com) or on Twitter @e3BethT
Available in Print+Digital and Digital Editions.
Learn how to build dynamic and secure websites.
The book also walks you through building a typical Create-Read-Update-Delete (CRUD) application. Along the way, you’ll get solid, practical advice on how to add authentication, handle file uploads, safely store passwords, application security, and more.
Purchase Your Copy
https://phpa.me/php-development-book
SECRET WEAPON!
Exception, uptime, and cron monitoring, all in one place
and easily installed in your web app. Deploy with confidence and be your team’s devops hero.
Are exceptions all that keep you up at night?
Honeybadger gives you full confidence in the health of your production systems.
DevOps monitoring, for developers. gasp!
Deploying web applications at scale is easier than it has ever been, but monitoring them is hard, and it’s easy to lose sight of your users. Honeybadger simplifies your production stack by combining three of the most common types of monitoring into a single, easy to use platform.
- **Exception Monitoring**: Delight your users by proactively monitoring for and fixing errors.
- **Uptime Monitoring**: Know when your external services go down or have other problems.
- **Check-In Monitoring**: Know when your background jobs and services go missing or silently fail.