## What this package does
qviewparsR is a pure-R parser for the binary
.Q-View project file format used in chemiluminescent
multiplex ELISA plate imaging and quantification. A single
.Q-View file bundles:
- a plain-text manifest header,
- an embedded H2 SQL database (with project metadata, the analyte panel, plate geometry, sample-to-well assignments, replicate pixel intensities, and – if the user generated a report inside the producing application – a fully rendered CSV report stored as a CLOB), and
- one or more binary LOB segments holding the raw chemiluminescent plate images.
read_qview() extracts everything except the raw images
and returns it as a list of tidy tibbles, with no Java runtime, no H2
database driver, and no compiled code anywhere in the package.
## Installation

```r
# install.packages("pak")
pak::pak("CTTIR/qviewparsR")
```

qviewparsR requires R >= 4.1.0 plus a small set of tidyverse-aligned dependencies (cli, dplyr, lifecycle, openxlsx2, readr, rlang, tibble, tidyr). Plotting requires ggplot2 (Suggested); the Shiny front-end additionally requires shiny, bslib, and DT.
## A complete walk-through

The end-to-end workflow is short:

```r
library(qviewparsR)

qv <- read_qview("path/to/plate.Q-View")

qv                      # one-screen summary
qv$analytes             # spot_number, analyte, unit, lod, lloq, uloq
qv$well_groups          # one row per sample/calibrator/control
qv$pixel_intensities    # long-format replicate readings
qv$summary_statistics   # per-group mean / std-dev / CV rows
qv$plate_layout         # one row per plate well
summary(qv)             # mean / SD / CV per well type x analyte
```

read_qview() always returns a list of class qview with eleven slots described in ?read_qview. Empty slots are zero-row tibbles rather than NULL, so downstream code can rely on shape stability.
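Because empty slots are still tibbles, downstream code can branch on nrow() instead of testing for NULL. A minimal sketch (assuming a parsed object qv as above):

```r
# Shape stability: every slot is a tibble, so nrow() is always defined.
if (nrow(qv$summary_statistics) == 0L) {
  message("This project stores no summary statistics.")
}
```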
## The naming convention

The producing software rewrites identifiers from the original well-assignment template CSV before it stores them. The mapping is systematic and reversible:

| Template value | Stored as |
|---|---|
| Cal 1 … Cal N | ICal 1 … ICal N |
| Low | GLow |
| High | HHigh |
| FD24277364, 1211498458, … | NFD24277364, N1211498458, … |
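The reversal implied by the table is simple enough to sketch in base R. This is illustrative only — strip_qview_prefix() below is the supported implementation, and real projects may contain identifiers these patterns miss:

```r
# Base-R sketch of the prefix reversal shown in the mapping table.
strip_prefix_sketch <- function(x) {
  x <- sub("^ICal ", "Cal ", x)              # ICal N  -> Cal N
  x <- sub("^GLow$", "Low", x)               # GLow    -> Low
  x <- sub("^HHigh$", "High", x)             # HHigh   -> High
  sub("^N(?=[A-Z0-9])", "", x, perl = TRUE)  # NFD...  -> FD..., N12... -> 12...
}

strip_prefix_sketch(c("ICal 1", "GLow", "HHigh", "NFD24277364"))
#> [1] "Cal 1"      "Low"        "High"       "FD24277364"
```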
strip_qview_prefix() reverses the rewrite. Pass strip_prefix = TRUE to read_qview() to apply it across every sample-id column at once:

```r
qv <- read_qview("path/to/plate.Q-View", strip_prefix = TRUE)
unique(qv$well_groups$sample_id)
```

The vectorised helper is also useful on its own:

```r
strip_qview_prefix(c("ICal 1", "GLow", "HHigh", "NFD24277364"))
#> [1] "Cal 1"      "Low"        "High"       "FD24277364"
```

## Coercion and tidy-data idioms
as_tibble() returns the long-format pixel_intensities table – the primary tabular payload – so a parsed object can drop straight into a dplyr / ggplot2 pipeline:

```r
library(dplyr)
library(tibble)

qv |>
  as_tibble() |>
  filter(replicate == 1L) |>
  group_by(analyte, unit) |>
  summarise(median_pi = median(pixel_intensity, na.rm = TRUE),
            .groups = "drop")
```

The is_qview() predicate lets package-aware functions guard their inputs:
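For example, a downstream helper might refuse anything else up front (my_qc_report is a hypothetical function name, not part of the package):

```r
# Hypothetical downstream function guarded by is_qview().
my_qc_report <- function(x) {
  if (!is_qview(x)) {
    stop("`x` must be a qview object, as returned by read_qview().",
         call. = FALSE)
  }
  summary(x)
}
```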
## Visualisation

Three quick-look plot types are built in (ggplot2 required):

```r
plot(qv, type = "plate_map")           # 96-well plate, fill = well type
plot(qv, type = "intensity_heatmap")   # facet per analyte, fill = PI
plot(qv, type = "replicate_scatter")   # rep 1 vs rep 2 per analyte
```

Each call returns a ggplot object, so themes, scales, and labels can be added on top.
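For instance (the theme and title here are illustrative, not package defaults):

```r
library(ggplot2)

# Layer standard ggplot2 customisations onto the returned object.
plot(qv, type = "plate_map") +
  theme_minimal() +
  labs(title = "Plate map", fill = "Well type")
```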
## Exporting

Three writers cover the common destinations. All return the parsed object invisibly, so they compose with |>:

```r
qv |>
  write_qview_xlsx("plate.xlsx") |>   # one sheet per parsed table
  write_qview_csv ("plate_csv/") |>   # one CSV file per parsed table
  write_qview_rds ("plate.rds")       # full lossless R round-trip
```

Add overwrite = TRUE to write_qview_xlsx() / write_qview_rds() to replace existing destinations.

The legacy aliases qview_to_xlsx() and qview_to_csv_dir() still work but are flagged with lifecycle::deprecate_warn() and will be removed in a future release.
## Cross-validating against a template CSV

Every well assignment is already embedded in the Q-View file. If you also have the original well-assignment template CSV the producing application imported, read_qview_template() parses it into a tibble that aligns with qv$plate_layout for cross-validation:

```r
tmpl <- read_qview_template("path/to/template.csv")

qv$plate_layout |>
  dplyr::left_join(tmpl, by = "well", suffix = c("_qview", "_template")) |>
  dplyr::filter(sample_id_qview != sample_id_template)
```

Any rows surviving the filter expose template-vs-Q-View mismatches.
## Interactive front-end

For non-coding collaborators, qview_app() launches a small Shiny application that exposes the same workflow visually. The app accepts a .Q-View upload (and optionally a template CSV), shows every parsed table and the three plots in tabs, and offers one-click downloads as xlsx, rds, or a zipped CSV directory.
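Launching it is a single call (the session blocks until the app is closed):

```r
library(qviewparsR)

qview_app()
```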
## Error handling

Every exported function validates its inputs early and raises a structured cli::cli_abort() error that points at the user’s call, not at internal helpers. Typical shapes you may see:

```
Error in `read_qview()`:
! `path` must be an existing file.
x "missing.Q-View" does not exist.
```

```
Error in `read_qview()`:
! `path` is not a valid `.Q-View` project file.
x "junk.bin" is missing the expected container header.
i Expected a numeric container version followed by "Q-View Project".
```

The messages carry an i bullet whenever there is an actionable hint.
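Because these are ordinary R conditions, they can also be caught programmatically, for example:

```r
# Fall back to NULL instead of aborting when the file cannot be parsed.
qv <- tryCatch(
  read_qview("missing.Q-View"),
  error = function(e) {
    message("Could not parse file: ", conditionMessage(e))
    NULL
  }
)
```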
## Where to go next

- ?read_qview – exhaustive description of every output slot.
- ?write_qview – the three exporters share one help page.
- ?summary.qview – detail on the per-well-type aggregation.
- ?qview_app – launching the interactive app.

The inst/extdata/ directory does not ship a Q-View fixture because the binary format is large; instead, the test suite skips cleanly when no fixture is present (tests/testthat/test-read_qview.R documents the lookup paths).