These posts chronicle the development of StandardsView, and in some cases contain information and discussion not found on the site. The posts are in reverse chronological order. Embedding isn’t working out well, so I’m copying and pasting the text.
February 14, 2026
Second of two posts. In preparing the PE Table for StandardsView, I realized that the grade-band/grade-level codes, both the official NGSS codes and codes that I invented, are less useful than they should be, because they do not sort as one would often like in education: with little kids on one end and big kids on the other. In other words, “K-2”, “3-5”, “6-8”, “9-12” are sorted by Excel, by default, as “3-5”, “6-8”, “9-12”, “K-2”. Now, I know that Excel’s Custom Sort is a thing, and I believe you when you say you could whip up a Python script to handle it, but that’s missing the point. The point, according to the Standards as Tools philosophy, is that the amount of work you should have to do to sort the little kids on one end and the big kids on the other is zero.
(The PE Table sorts as one would like because it uses a custom sort behind the scenes.)
This is a bummer because I was proud of the grade-level code scheme I settled on: K2, 35, MS, HS. All have two characters and there are no hyphens. But it sorts even worse than the hyphenated ranges above: 35, HS, K2, MS. A whiplash sort!
So now the StandardsView design and development teams are mustering the energy to redo our codes. My current thought, for grade bands, is 0002, 0305, 0608, 0912. “00” is kindergarten; “02” is second grade, and so forth. Rework is unfortunate, but when we find mistakes or other infelicities, we do our best to address them.
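The sorting behavior is easy to see in a few lines of Python, whose default string sort orders these particular codes the same way Excel’s text sort does (digits before letters):

```python
# Default lexicographic sort scrambles the hyphenated grade bands:
hyphenated = ["K-2", "3-5", "6-8", "9-12"]
print(sorted(hyphenated))   # ['3-5', '6-8', '9-12', 'K-2']

# The two-character codes fare even worse (the whiplash sort):
two_char = ["K2", "35", "MS", "HS"]
print(sorted(two_char))     # ['35', 'HS', 'K2', 'MS']

# Zero-padded numeric codes sort in grade order with no extra work:
padded = ["0002", "0305", "0608", "0912"]
print(sorted(padded))       # ['0002', '0305', '0608', '0912']
```

The zero-padding matters: without it, "2" sorts after "12", which is the same trap the hyphenated ranges fall into.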
[I couldn’t resist: I defined a goodness-of-sort by assigning the ranges the values K2=1, 35=2, MS=3, HS=4. Then I took the differences of successive values and summed the absolute values of those differences. That sum is the goodness-of-sort. Lowest is best, and for the desired sort, the three differences are 1, 1, 1, and the sum is 3. Question: what’s the worst possible sort? Answer: the one given by my codes. The sequence is 2, 4, 1, 3, the three differences are 2, -3, 2, and the goodness-of-sort is 7. That’s the worst you can do, according to Claude (I haven’t proven it). An equally heinous sort is MS, K2, HS, 35, but that’s not how Excel sorts.]
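The bracketed metric fits in a few lines of Python, and since there are only 24 possible orderings of four codes, the worst-case claim can be settled by brute force rather than proof:

```python
from itertools import permutations

RANKS = {"K2": 1, "35": 2, "MS": 3, "HS": 4}

def goodness_of_sort(sequence, ranks=RANKS):
    """Sum of absolute differences between successive rank values (lower is better)."""
    values = [ranks[code] for code in sequence]
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

print(goodness_of_sort(["K2", "35", "MS", "HS"]))  # 3, the desired sort
print(goodness_of_sort(["35", "HS", "K2", "MS"]))  # 7, Excel's sort of the two-character codes
print(goodness_of_sort(["MS", "K2", "HS", "35"]))  # 7, the equally heinous sort

# Exhaustive check over all 24 orderings confirms 7 is the worst possible score:
print(max(goodness_of_sort(p) for p in permutations(RANKS)))  # 7
```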
February 14, 2026
First of two posts. In a recent response to a commenter at StandardsView, I wrote “It would be great to find that someone has already collected the existing clarification statements and assessment boundaries [of the NGSS] into a useful whole.” This was unintentionally pretty funny, because of course there is such a person, and that person is me; it’s what the site is all about. More precisely, the site already rests on a machine-manipulable rendition of that information. Getting to this point was the hard part. Now the question is, what’s the best way to display the info to address the question(s) at hand?
Here is a start. The PE Table enables one to select from 49 (!) pieces of information related to an NGSS performance expectation and display the selected pieces in a spreadsheet-like table. The columns can be sorted and filtered, and the cells can be expanded. In this way, for example, you can compare all the assessment boundaries for PEs related to force and motion.
The table can be exported as JSON or csv. It’s at https://lnkd.in/eM-Vmt3z. I hope you find it useful.
February 7, 2026
The StandardsView website now allows the user to view the design space of Disciplinary Core Ideas, Science and Engineering Practices, Crosscutting Concepts, and Grade Bands, and locate the existing NGSS performance expectations within that space.
The viewer sets up a grid consisting of the DCI component ideas for a particular discipline, crossed with either the SEP, the CCC, or the grade bands. The existing PE are displayed in the cells of this grid. The two dimensions not part of the grid are available as filters. You’ll notice the viewer does not show the fundamental cells of the design space: DCI subidea x SEP subpractice x CCC subconcept x grade band. That would be of limited use, because the vast majority of these cells are empty. (The matrix is sparse.)
The implementation is a simple CSS grid. Much more sophisticated solutions are readily available. Even in this simple form, however, the viewer demonstrates an essential capability of any product built on a multidimensional knowledge space.
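The underlying grouping can be sketched in a few lines. The records and field names below are my own illustration, not the site’s actual schema: PEs are binned into a sparse grid keyed by two dimensions, and the dimensions left out of the grid become filters.

```python
from collections import defaultdict

# Illustrative PE records; field names and values are placeholders,
# not StandardsView's actual schema.
pes = [
    {"code": "MS-PS2-1", "dci": "PS2.A", "sep": "Constructing Explanations", "band": "MS"},
    {"code": "MS-PS2-2", "dci": "PS2.A", "sep": "Planning Investigations",   "band": "MS"},
    {"code": "3-PS2-1",  "dci": "PS2.A", "sep": "Planning Investigations",   "band": "3-5"},
]

def build_grid(records, row_key, col_key, **filters):
    """Group PE codes into a sparse {(row, col): [codes]} grid.
    Dimensions not used as row or column can be applied as filters."""
    grid = defaultdict(list)
    for r in records:
        if all(r.get(k) == v for k, v in filters.items()):
            grid[(r[row_key], r[col_key])].append(r["code"])
    return grid

# DCI x SEP grid, filtered to middle school; empty cells simply never exist,
# which is what makes the sparse representation cheap.
grid = build_grid(pes, "dci", "sep", band="MS")
print(dict(grid))
```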
February 4, 2026
LinkedIn articles sound great, but something about the implementation is off. They are difficult to find. Accordingly, I’ve copied my LinkedIn articles to https://lnkd.in/e7_y4Wtq. There are three articles related to physics in various K-12 science standards, an article on heredity and Bayesian statistics, and the article presaging StandardsView that argues that science standards should be expressed as a database.
In other StandardsView news, I would like to share an eye-popping value that I discovered while developing the site. I present it, for now, without discussion of implications.
NGSS Performance Expectations (PE) are built from SEP subpractices, DCI subideas, and CCC subconcepts (“subpractices”, “subideas”, and “subconcepts” are my vocabulary). Here are the number of these entities in each grade band:
K-2: 44 SEP subpractices, 67 DCI subideas, 11 CCC subconcepts
3-5: 49, 102, 16
MS: 59, 149, 25
HS: 56, 160, 29
Making the simplifying assumption that each PE draws from exactly one of each, the number of possible PE at each grade band is
K-2: 32,428
3-5: 79,968
MS: 219,775
HS: 259,840
and the total number of possible PE is 592,011. There are 208 official NGSS PE. 208 divided by 592,011 is 0.00035, or 0.035%.
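The arithmetic is easy to check. The sketch below reproduces the counts above under the same one-of-each simplifying assumption:

```python
# (SEP subpractices, DCI subideas, CCC subconcepts) per grade band, from the post.
bands = {
    "K-2": (44, 67, 11),
    "3-5": (49, 102, 16),
    "MS":  (59, 149, 25),
    "HS":  (56, 160, 29),
}

# Assuming each PE draws on exactly one of each, the possible PEs per band
# are the product of the three counts.
possible = {band: sep * dci * ccc for band, (sep, dci, ccc) in bands.items()}
total = sum(possible.values())

for band, n in possible.items():
    print(f"{band}: {n:,}")              # K-2: 32,428 ... HS: 259,840
print(f"total: {total:,}")               # total: 592,011
print(f"share used: {208 / total:.3%}")  # share used: 0.035%
```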
January 31, 2026
Via PE Maker, the StandardsView website now gives the user the ability to generate and export a simple performance expectation from any DCI subidea, SEP subpractice, and CCC subconcept. Though the current implementation is on a small scale–one PE at a time–it is important as an example of a capability needed to authentically engage with a multidimensional knowledge space.
PE Maker employs an AI tool, Claude, to suggest language for the performance expectation. It occurred to me that this targeted, compact use of AI may be well-suited for testing. To that end, I’ve provided the prompt and examples sent to Claude. If you have ideas for improving the PE language, or for any other interesting experiments, please let me know, here or on the site.
January 29, 2026
I was pleased to see this article (unfortunately maybe paywalled), https://lnkd.in/eRpAvKk2, discussing replacing pdfs. Factify’s CEO, Matan Gavish, makes the point that AI and large language models have trouble reading pdfs because they are coded for printers and screens, not for machine processing.
I’ve been living this for the past month or so, as Claude (Code) and I tried to extract the components of the K-12 Framework and NGSS from various pdfs to populate StandardsView (standards.reganes.com). It was much more difficult than I anticipated, but I took solace in the fact that if the Standards as Tools mindset takes hold, no one will ever have to do it again. Standards will be provided as JSON, or csv, or some other format that helps people do work.
January 25, 2026
I’m pleased to announce StandardsView, an evolving model for the next set of K-12 science standards, Standards2030.
StandardsView uses the NGSS as a vehicle for the Standards as Tools philosophy. Accordingly, StandardsView offers the components (SEP, DCI, and so forth) of the NGSS for download, including around 200 DCI subideas not used in performance expectations.
Please take a look and most importantly, participate in developing Standards as Tools. How could standards help you do your work? Join the conversation and submit ideas. I’ll implement them in StandardsView, if I can.
StandardsView is at standards.reganes.com.