Compare commits

...

144 commits

Author SHA1 Message Date
965aa310ca
new comment on build
Some checks failed
R-CMD-check / macos-latest (release) (push) Has been cancelled
R-CMD-check / ubuntu-latest (oldrel-1) (push) Has been cancelled
R-CMD-check / ubuntu-latest (release) (push) Has been cancelled
R-CMD-check / windows-latest (release) (push) Has been cancelled
pkgdown / pkgdown (push) Has been cancelled
test-coverage.yaml / test-coverage (push) Has been cancelled
2025-03-10 11:37:59 +01:00
b512e6a570
new release 2025-03-10 10:35:13 +01:00
ff466c044c
fixing a bug when not exporting from the first instrument and pivoting to wide format 2025-03-07 16:01:12 +01:00
821e4583dd
ready for cran 2025-03-05 14:39:29 +01:00
58e63eb1cf
reversed metadata focus move 2025-03-05 14:38:15 +01:00
10064d7ee0
ready for cran 2025-03-05 13:41:08 +01:00
0b5319f647
allows not splitting data 2025-03-05 13:40:56 +01:00
2e1e7822a4
Interprets logicals 2025-03-05 13:40:40 +01:00
c9ee46f6a4
more data formats to export (semi-)long data 2025-03-05 13:40:23 +01:00
3ae16b767f
bug 2025-03-04 14:00:00 +01:00
3c4b132fb4
interpret single level vectors correctly 2025-03-04 13:54:58 +01:00
bb24a7d7bd
new as_logical function to ease binary data interpretation - version bump. Hi March! 2025-03-04 13:00:49 +01:00
f91aed0948
version 2025-02-25 10:50:19 +01:00
319ccfd9dd
updated covr action 2025-02-25 10:45:51 +01:00
7dfbb9b549
now interprets empty variables with empty levels attribute as logicals to avoid returning factors with empty levels 2025-02-25 10:36:37 +01:00
3eea26223b
R version dependency 2025-01-29 14:18:48 +01:00
0e900a2776
updated for release 2025-01-29 14:09:00 +01:00
7bbc147304
new docs 2025-01-29 11:17:49 +01:00
8d20901636
cleaning and fixes for a minor release 2025-01-29 10:04:38 +01:00
7d82eeebd4
fct_drop refined 2024-12-19 21:12:56 +01:00
f22a0a56b2
use correct factor order 2024-12-04 08:05:56 +01:00
149c2747f4
better test 2024-12-04 08:05:45 +01:00
7f04fafd9b
addition to correctly format factors for upload 2024-12-04 07:35:54 +01:00
6223d2063c
removed uri 2024-12-02 12:38:03 +01:00
cfc441120f
version bump and release to CRAN 2024-12-02 08:02:43 +01:00
c52fd2947c
quick and working sollution to get variable suffixes in the tables. included in the easy_redcap() when widening 2024-11-28 21:00:28 +01:00
4ac9282c8f
spelling 2024-11-28 14:34:54 +01:00
30d82e5288
new vignette on getting started 2024-11-28 14:33:20 +01:00
f431931e86
adjusted with a couple of flags 2024-11-28 14:33:03 +01:00
9390735af3
new tests 2024-11-28 14:32:30 +01:00
2aa268f747
support labelled data 2024-11-28 14:32:03 +01:00
5926c12da6
adjusted docs 2024-11-28 14:31:27 +01:00
ea26d18c43
adjusted docs 2024-11-28 14:30:53 +01:00
053c4447ad
include data... 2024-11-27 15:49:45 +01:00
21f7b0cb83
suppressing warnings in test 2024-11-27 15:49:35 +01:00
87505daeeb
on shinyapps.io and running. woop woop! 2024-11-27 10:35:40 +01:00
d8ca1d9eb1
update 2024-11-27 10:09:46 +01:00
80328d6e9a
new helper functions for data labelling based on data dictionary 2024-11-27 09:56:32 +01:00
9cae725de2
extension to forcats::fct_drop to perform across data.frame 2024-11-27 09:56:06 +01:00
daf0e7852f
extend to work across data.frames labelled as redcapcast_labelled, haven_labelled or labelled 2024-11-27 09:55:41 +01:00
d1425aaac0
adjusting 2024-11-27 09:54:38 +01:00
2ba46e8e7a
added option to export "both" raw and label by labelling raw data to preserve as much information as possible 2024-11-27 09:51:51 +01:00
57f9f23ece
restructuring 2024-11-27 07:48:10 +01:00
99cce26753
bug hunting 2024-11-27 07:42:03 +01:00
4ad21c7f57
restructuring 2024-11-26 14:46:22 +01:00
21c2dc0444
New function to export redcap data with labels 2024-11-26 14:46:11 +01:00
f1e67b52ab
cleaning 2024-11-26 14:44:51 +01:00
45315080c5
revert 2024-11-25 10:12:35 +01:00
c6cbb4abc8
down numbering to trying to solve issue for now 2024-11-25 10:04:00 +01:00
eab88d562a
clean 2024-11-25 09:54:47 +01:00
637e950dc8
first bug 2024-11-25 09:39:45 +01:00
bae5b6d2ec
missed loading REDCapCAST library or reference function in ui.R - bummer 2024-11-25 09:13:10 +01:00
90f0a9d382
cran 2024-11-22 14:02:09 +01:00
1f659c5bd9
reworded title 2024-11-22 13:06:54 +01:00
5e064523f7
spelling 2024-11-22 13:06:02 +01:00
1683203ac3
news; heading for CRAN 2024-11-22 13:00:17 +01:00
ae1c120cd8
cleaning 2024-11-21 14:22:36 +01:00
40d95e41c3
minor adjustments and bug fixing 2024-11-21 11:18:38 +01:00
f094394933
moving functions from app to fix dependencies 2024-11-20 16:25:26 +01:00
a896bf4e76
documented preview functions and included gt 2024-11-20 16:15:41 +01:00
47fb3fceca
small adjustments to interpret character vectors of roman numerals as numeric vector 2024-11-20 15:23:31 +01:00
18544ddcfe
complete copy/paste from forcats and haven 2024-11-20 14:31:01 +01:00
8aa1ec41dc
clean 2024-11-20 13:02:46 +01:00
6fb55fd2cc
trial and error 2024-11-20 12:41:48 +01:00
c86ae9a364
executing examples with as_factor() errors. I think due to redundancy. Will investigate. 2024-11-20 12:40:29 +01:00
69e1520aff
specify ID column 2024-11-20 12:10:33 +01:00
0600adcce7
updated docs 2024-11-20 12:10:07 +01:00
91d41d975a
documentation 2024-11-20 12:09:53 +01:00
b7e0873b00
better handling intrument export 2024-11-20 12:09:30 +01:00
c3b54b0860
as_factor functions to preserve attributes 2024-11-20 12:09:13 +01:00
42efec437a
keeping pegeler as coauthor 2024-11-19 13:10:01 +01:00
942b3098cc
updated docs 2024-11-19 13:09:10 +01:00
f5965a2748
implemented specification of categorical variables (logicals are converted to factor) 2024-11-19 12:55:09 +01:00
fe9918dc10
implement support for variable attributes for field label incl conversion of logicals to factor 2024-11-19 12:54:26 +01:00
f2b2784547
adding as exported element 2024-11-18 16:59:06 +01:00
3590a9e216
formatting 2024-11-18 16:50:03 +01:00
4e7af7d01f
updated vignette and formatting 2024-11-18 16:26:10 +01:00
0c3286cb2f
news 2024-11-18 16:14:45 +01:00
9d53f84427
docs and name update 2024-11-18 14:41:44 +01:00
9a069a422f
use tools file_ext instead of own 2024-11-18 14:41:30 +01:00
954f58bf1d
test data updated 2024-11-18 14:40:47 +01:00
ea08a2066f
all data parsing and formatting has been seperated in individual functions 2024-11-18 14:40:32 +01:00
4911d4dbc8
renamed 2024-11-18 10:54:24 +01:00
d56fd81966
restructured readme for fork detach 2024-11-18 10:53:53 +01:00
927d485739
minor name edit 2024-11-18 08:17:55 +01:00
a518ada45b
new comment 2024-11-15 21:27:02 +01:00
9ae056abbb
spelling 2024-11-15 21:24:39 +01:00
4fb57bbeb3
ignore 2024-11-15 21:23:11 +01:00
d0dfaf70db
new ignore 2024-11-15 21:22:21 +01:00
597ed69783
citation 2024-11-15 20:59:07 +01:00
4b4f513956
adjusting sidebar width 2024-11-15 20:58:57 +01:00
d8d11f6da1
minor edits 2024-11-15 20:54:37 +01:00
a02f96828e
updates on shiny-cast 2024-11-15 20:47:23 +01:00
3cfdb66a32 included shiny app with package, bslib, documentation, fixing NA bug 2024-11-15 20:42:41 +01:00
Andreas Gammelgaard Damsbo
1189da6c86
Merge pull request #15 from agdamsbo/agdamsbo-patch-1
Update README.md
2024-10-25 10:25:44 +02:00
Andreas Gammelgaard Damsbo
dfd6690d3a
Update README.md 2024-10-25 10:25:18 +02:00
ff22ba05d8
disabling examples 2024-10-24 11:56:29 +02:00
28beea676c
preparing for next version 2024-10-24 11:41:48 +02:00
3e4b1b1549
exporting redcap instrument from shiny app 2024-10-24 11:37:40 +02:00
7f74ea5144
version bump. new spell check without tests 2024-10-03 09:24:18 +02:00
e389ec9c28
updated link 2024-10-02 12:52:18 +02:00
b95879ce01
updated comments 2024-10-02 10:28:33 +02:00
e9c8eced50
version bump 2024-10-02 10:04:39 +02:00
a84c528815
reinstall 2024-09-09 11:37:29 +02:00
e4ce26772c
trial.. 2024-09-09 11:15:12 +02:00
d9f49e51ce
please complete tests now 2024-09-09 11:01:46 +02:00
ce33650501
trying with renv again after multiple check failures on macos arm64 with RHUB 2024-09-09 09:45:53 +02:00
93c68d9f20
updated 2024-09-05 15:38:46 +02:00
db4bc4412b
removed hash matching from tests to try to handle test errors on different systems 2024-09-05 15:38:38 +02:00
c6f9737c91
rhub is back 2024-09-05 13:31:02 +02:00
85063839b1
trying to reset rhub 2024-09-05 13:30:02 +02:00
c7ab477203
renv out 2024-09-05 12:52:32 +02:00
7bbdf9b7fb
tested 2024-06-07 14:37:50 +02:00
3350050778
new 2024-06-07 14:20:37 +02:00
a341411cbf
ignere comments 2024-06-07 14:20:22 +02:00
c5ef79b97e
correct link 2024-06-07 14:18:43 +02:00
6abee8f78e
minor app updates 2024-06-07 12:01:09 +02:00
70f7ab188b
new comments 2024-06-07 11:22:41 +02:00
2697206272
minor 2024-06-07 11:16:58 +02:00
9378da535e
badge 2024-06-07 11:02:03 +02:00
bf9cf328cc
docs 2024-06-07 10:58:01 +02:00
4a56f4ec45 major update with new functions and renv is out! see NEWS section 2024-06-07 10:35:38 +02:00
Andreas Gammelgaard Damsbo
b35142f0cc
Merge pull request #13 from agdamsbo/agdamsbo-patch-3
Create FUNDING.yml
2024-06-07 10:19:02 +02:00
Andreas Gammelgaard Damsbo
104220d10d
Create FUNDING.yml 2024-06-07 10:14:42 +02:00
Andreas Gammelgaard Damsbo
e26efd01b4
Update README.md 2024-05-23 11:43:59 +02:00
04f5bec85c
major overhaul with new functions. docs are lacking 2024-05-02 13:31:21 +02:00
1fd3911974
added rhub v2 2024-04-12 14:37:41 +02:00
0d8aaee9ff
new additions 2024-04-12 12:20:13 +02:00
9a167e6110
updated to handle form names as variable name pre or suffix. prepared for shiny app extension 2024-04-12 12:19:56 +02:00
6343d68cb5
new function to create instrument meta data 2024-04-12 12:18:58 +02:00
23ebdb5ee7
fixed paste 2024-04-09 10:57:29 +02:00
4cd484bd6b
new version 2024-04-09 10:54:52 +02:00
1bdbb0df94
fix read_redcap_tables 2024-04-09 10:51:31 +02:00
0628bde488
added "shiny::" 2024-04-04 09:50:04 +02:00
a181a2816c
deploy_shiny() has moved to a different package 2024-03-14 09:42:44 +01:00
c54cea7be0
renv added to the last action runner 2024-03-14 09:33:13 +01:00
e4e4f5a6cc renv updated 2024-03-14 09:29:02 +01:00
cdb5311e8b ubuntu devel is left out as it keeps failing. renv setup included 2024-03-14 09:29:02 +01:00
8816618da8 dependencies removed and version bump 2024-03-14 09:29:02 +01:00
f787621f1b updated shiny app 2024-03-14 09:29:02 +01:00
Andreas Gammelgaard Damsbo
9c61b5e646
Merge pull request #9 from agdamsbo/agdamsbo-patch-2
Update CODE_OF_CONDUCT.md
2024-02-28 14:19:45 +01:00
Andreas Gammelgaard Damsbo
a6e6cefcb7
Update CODE_OF_CONDUCT.md 2024-02-28 14:19:29 +01:00
Andreas Gammelgaard Damsbo
b6c187bf54
Merge pull request #8 from agdamsbo/agdamsbo-patch-1
added hosted shiny app
2024-02-28 09:39:50 +01:00
Andreas Gammelgaard Damsbo
26ee8aa528
added hosted shiny app 2024-02-28 09:39:30 +01:00
128 changed files with 6872 additions and 1010 deletions


@ -16,6 +16,8 @@
^cran-comments\.md$
^CRAN-SUBMISSION$
drafting
app
^\.lintr$
^CODE_OF_CONDUCT\.md$
^~/REDCapCAST/inst/shiny-examples/casting/rsconnect$
^inst/shiny-examples/casting/functions\.R$
^functions\.R$


@ -1,2 +1 @@
_R_CHECK_FORCE_SUGGESTS_=FALSE
_R_CHECK_SYSTEM_CLOCK_=0


@ -1,13 +1 @@
options(
renv.settings.snapshot.type = "explicit",
renv.config.auto.snapshot = TRUE,
renv.config.pak.enabled = TRUE,
rmarkdown.html_vignette.check_title = FALSE
)
source("renv/activate.R")
if (interactive()) {
suppressMessages(require(usethis))
}

14
.github/FUNDING.yml vendored Normal file

@ -0,0 +1,14 @@
# These are supported funding model platforms
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: agdamsbo
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: agdamsbo
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


@ -20,7 +20,7 @@ jobs:
config:
- {os: macos-latest, r: 'release'}
- {os: windows-latest, r: 'release'}
- {os: ubuntu-latest, r: 'devel', http-user-agent: 'release'}
# - {os: ubuntu-latest, r: 'devel', http-user-agent: 'release'}
- {os: ubuntu-latest, r: 'release'}
- {os: ubuntu-latest, r: 'oldrel-1'}
@ -39,6 +39,8 @@ jobs:
http-user-agent: ${{ matrix.config.http-user-agent }}
use-public-rspm: true
# - uses: r-lib/actions/setup-renv@v2
- uses: r-lib/actions/setup-r-dependencies@v2
with:
extra-packages: any::rcmdcheck


@ -30,7 +30,7 @@ jobs:
with:
use-public-rspm: true
- uses: r-lib/actions/setup-renv@v2
# - uses: r-lib/actions/setup-renv@v2
- uses: r-lib/actions/setup-r-dependencies@v2
with:

95
.github/workflows/rhub.yaml vendored Normal file

@ -0,0 +1,95 @@
# R-hub's generic GitHub Actions workflow file. It's canonical location is at
# https://github.com/r-hub/actions/blob/v1/workflows/rhub.yaml
# You can update this file to a newer version using the rhub2 package:
#
# rhub::rhub_setup()
#
# It is unlikely that you need to modify this file manually.
name: R-hub
run-name: "${{ github.event.inputs.id }}: ${{ github.event.inputs.name || format('Manually run by {0}', github.triggering_actor) }}"
on:
workflow_dispatch:
inputs:
config:
description: 'A comma separated list of R-hub platforms to use.'
type: string
default: 'linux,windows,macos'
name:
description: 'Run name. You can leave this empty now.'
type: string
id:
description: 'Unique ID. You can leave this empty now.'
type: string
jobs:
setup:
runs-on: ubuntu-latest
outputs:
containers: ${{ steps.rhub-setup.outputs.containers }}
platforms: ${{ steps.rhub-setup.outputs.platforms }}
steps:
# NO NEED TO CHECKOUT HERE
- uses: r-hub/actions/setup@v1
with:
config: ${{ github.event.inputs.config }}
id: rhub-setup
linux-containers:
needs: setup
if: ${{ needs.setup.outputs.containers != '[]' }}
runs-on: ubuntu-latest
name: ${{ matrix.config.label }}
strategy:
fail-fast: false
matrix:
config: ${{ fromJson(needs.setup.outputs.containers) }}
container:
image: ${{ matrix.config.container }}
steps:
- uses: r-hub/actions/checkout@v1
- uses: r-hub/actions/platform-info@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
- uses: r-hub/actions/setup-deps@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
- uses: r-hub/actions/run-check@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
other-platforms:
needs: setup
if: ${{ needs.setup.outputs.platforms != '[]' }}
runs-on: ${{ matrix.config.os }}
name: ${{ matrix.config.label }}
strategy:
fail-fast: false
matrix:
config: ${{ fromJson(needs.setup.outputs.platforms) }}
steps:
- uses: r-hub/actions/checkout@v1
- uses: r-hub/actions/setup-r@v1
with:
job-config: ${{ matrix.config.job-config }}
token: ${{ secrets.RHUB_TOKEN }}
- uses: r-hub/actions/platform-info@v1
with:
token: ${{ secrets.RHUB_TOKEN }}
job-config: ${{ matrix.config.job-config }}
- uses: r-hub/actions/setup-deps@v1
with:
job-config: ${{ matrix.config.job-config }}
token: ${{ secrets.RHUB_TOKEN }}
- uses: r-hub/actions/run-check@v1
with:
job-config: ${{ matrix.config.job-config }}
token: ${{ secrets.RHUB_TOKEN }}


@ -4,9 +4,10 @@ on:
push:
branches: [main, master]
pull_request:
branches: [main, master]
name: test-coverage
name: test-coverage.yaml
permissions: read-all
jobs:
test-coverage:
@ -15,7 +16,7 @@ jobs:
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: r-lib/actions/setup-r@v2
with:
@ -23,28 +24,39 @@ jobs:
- uses: r-lib/actions/setup-r-dependencies@v2
with:
extra-packages: any::covr
extra-packages: any::covr, any::xml2
needs: coverage
- name: Test coverage
run: |
covr::codecov(
cov <- covr::package_coverage(
quiet = FALSE,
clean = FALSE,
install_path = file.path(Sys.getenv("RUNNER_TEMP"), "package")
install_path = file.path(normalizePath(Sys.getenv("RUNNER_TEMP"), winslash = "/"), "package")
)
print(cov)
covr::to_cobertura(cov)
shell: Rscript {0}
- uses: codecov/codecov-action@v4
with:
# Fail if error if not on PR, or if on PR and token is given
fail_ci_if_error: ${{ github.event_name != 'pull_request' || secrets.CODECOV_TOKEN }}
file: ./cobertura.xml
plugin: noop
disable_search: true
token: ${{ secrets.CODECOV_TOKEN }}
- name: Show testthat output
if: always()
run: |
## --------------------------------------------------------------------
find ${{ runner.temp }}/package -name 'testthat.Rout*' -exec cat '{}' \; || true
find '${{ runner.temp }}/package' -name 'testthat.Rout*' -exec cat '{}' \; || true
shell: bash
- name: Upload test results
if: failure()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: coverage-test-failures
path: ${{ runner.temp }}/package

5
.gitignore vendored

@ -10,3 +10,8 @@ docs
drafting
\.DS_Store
.DS_Store
cran-comments.md
~/REDCapCAST/inst/shiny-examples/casting/rsconnect
~/REDCapCAST/inst/shiny-examples/casting/rsconnect/
inst/shiny-examples/casting/functions.R
functions.R

7
.lintr

@ -1,7 +0,0 @@
linters: linters_with_defaults(
commented_code_linter = NULL
)
encoding: "UTF-8"
exclusions: list(
"drafting/"
)


@ -118,7 +118,7 @@ version 2.1, available at
<https://www.contributor-covenant.org/version/2/1/code_of_conduct.html>.
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][https://github.com/mozilla/inclusion].
[Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/inclusion).
For answers to common questions about this code of conduct, see the FAQ at
<https://www.contributor-covenant.org/faq>. Translations are available at <https://www.contributor-covenant.org/translations>.


@ -1,16 +1,18 @@
Package: REDCapCAST
Title: REDCap Castellated Data Handling
Version: 24.2.1
Title: REDCap Metadata Casting and Castellated Data Handling
Version: 25.3.2
Authors@R: c(
person("Andreas Gammelgaard", "Damsbo", email = "agdamsbo@clin.au.dk",
role = c("aut", "cre"),comment = c(ORCID = "0000-0002-7559-1154")),
person("Paul", "Egeler", email = "paulegeler@gmail.com", role = c("aut"),
comment = c(ORCID = "0000-0001-6948-9498")))
Description: Originally forked from the R part of 'REDCapRITS' by Paul Egeler.
Description: Casting metadata for REDCap database creation and handling of
castellated data using repeated instruments and longitudinal projects in
'REDCap'. Keeps a focused data export approach, by allowing export of only the
required data from the database. Also for casting new REDCap databases based
on datasets from other sources.
Originally forked from the R part of 'REDCapRITS' by Paul Egeler.
See <https://github.com/pegeler/REDCapRITS>.
'REDCap' database casting and handling of castellated data when using
repeated instruments and longitudinal projects. Keeps a focused data export
approach, by allowing to only export required data from the database.
'REDCap' (Research Electronic Data Capture) is a secure, web-based software
platform designed to support data capture for research studies, providing
1) an intuitive interface for validated data capture; 2) audit trails for
@ -19,7 +21,7 @@ Description: Originally forked from the R part of 'REDCapRITS' by Paul Egeler.
4) procedures for data integration and interoperability with external
sources (Harris et al (2009) <doi:10.1016/j.jbi.2008.08.010>;
Harris et al (2019) <doi:10.1016/j.jbi.2019.103208>).
Depends: R (>= 3.4.0)
Depends: R (>= 4.1.0)
Suggests:
httr,
jsonlite,
@ -27,18 +29,17 @@ Suggests:
Hmisc,
knitr,
rmarkdown,
gt,
usethis,
ggplot2,
here,
styler,
devtools,
roxygen2,
spelling
spelling,
rhub,
rsconnect,
pkgconfig
License: GPL (>= 3)
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.3.1
RoxygenNote: 7.3.2
URL: https://github.com/agdamsbo/REDCapCAST, https://agdamsbo.github.io/REDCapCAST/
BugReports: https://github.com/agdamsbo/REDCapCAST/issues
Imports:
@ -50,17 +51,34 @@ Imports:
purrr,
readr,
stats,
zip,
assertthat,
forcats,
vctrs,
gt,
bslib,
here,
glue,
gtsummary,
shiny,
haven,
openxlsx2,
rsconnect,
haven
readODS
Language: en-US
VignetteBuilder: knitr
Collate:
'REDCapCAST-package.R'
'utils.r'
'process_user_input.r'
'REDCap_split.r'
'ds2dd.R'
'as_factor.R'
'as_logical.R'
'doc2dd.R'
'ds2dd_detailed.R'
'easy_redcap.R'
'export_redcap_instrument.R'
'fct_drop.R'
'html_styling.R'
'mtcars_redcap.R'
'read_redcap_instrument.R'
'read_redcap_tables.R'
@ -68,5 +86,3 @@ Collate:
'redcapcast_data.R'
'redcapcast_meta.R'
'shiny_cast.R'
Language: en-US
VignetteBuilder: knitr


@ -1,35 +1,91 @@
# Generated by roxygen2: do not edit by hand
S3method(as_factor,character)
S3method(as_factor,data.frame)
S3method(as_factor,factor)
S3method(as_factor,haven_labelled)
S3method(as_factor,labelled)
S3method(as_factor,logical)
S3method(as_factor,numeric)
S3method(as_logical,data.frame)
S3method(as_logical,default)
S3method(fct_drop,data.frame)
S3method(fct_drop,factor)
S3method(process_user_input,character)
S3method(process_user_input,data.frame)
S3method(process_user_input,default)
S3method(process_user_input,response)
export(REDCap_split)
export(all_na)
export(apply_factor_labels)
export(apply_field_label)
export(as_factor)
export(as_logical)
export(case_match_regex_list)
export(cast_data_overview)
export(cast_meta_overview)
export(char2choice)
export(char2cond)
export(clean_field_label)
export(clean_redcap_name)
export(compact_vec)
export(create_html_table)
export(create_instrument_meta)
export(cut_string_length)
export(d2w)
export(deploy_shiny)
export(doc2dd)
export(ds2dd)
export(ds2dd_detailed)
export(easy_redcap)
export(export_redcap_instrument)
export(fct2num)
export(fct_drop)
export(file_extension)
export(focused_metadata)
export(format_redcap_factor)
export(format_subheader)
export(get_api_key)
export(get_attr)
export(guess_time_only)
export(guess_time_only_filter)
export(haven_all_levels)
export(html_tag_wrap)
export(is.labelled)
export(is_repeated_longitudinal)
export(match_fields_to_form)
export(named_levels)
export(nav_bar_page)
export(numchar2fct)
export(parse_data)
export(possibly_numeric)
export(possibly_roman)
export(process_user_input)
export(read_input)
export(read_redcap_instrument)
export(read_redcap_tables)
export(redcap_wider)
export(sanitize_split)
export(server_factory)
export(set_attr)
export(shiny_cast)
export(split_non_repeating_forms)
export(strsplitx)
export(ui_factory)
export(suffix2label)
export(var2fct)
export(vec2choice)
importFrom(REDCapR,redcap_event_instruments)
importFrom(REDCapR,redcap_metadata_read)
importFrom(REDCapR,redcap_read)
importFrom(forcats,as_factor)
importFrom(forcats,fct_drop)
importFrom(haven,read_dta)
importFrom(keyring,key_get)
importFrom(keyring,key_list)
importFrom(keyring,key_set)
importFrom(openxlsx2,read_xlsx)
importFrom(purrr,reduce)
importFrom(readODS,read_ods)
importFrom(readr,parse_time)
importFrom(readr,read_csv)
importFrom(readr,read_rds)
importFrom(tidyr,pivot_wider)
importFrom(tidyselect,all_of)

104
NEWS.md

@ -1,3 +1,103 @@
# REDCapCAST 25.3.2
* BUG: The `redcap_wider()` function would attempt to pivot an empty selection of columns from the list and fail, causing all functions relying on it to fail. Fixed by filtering out data.frames in the list with no columns beyond the "generics".
# REDCapCAST 25.3.1
* FIX: `as_factor()` now interprets empty variables with an empty levels attribute as logicals to avoid returning factors with empty levels.
* NEW: `as_logical()`: interprets vectors with two levels as logical if the values match a supplied list of logical pairs like "TRUE"/"FALSE", "Yes"/"No" or 1/2 (see the sketch below). Eases interpretation of data from databases with minimal metadata. Works on vectors and data.frames. Vectors with a single value matching any of the supplied levels are also interpreted (the first matching pair is chosen if there are several matches).
* NEW: `easy_redcap()`: new parameter `data_format` to specify the data format as one of c("wide", "list", "redcap", "long"). For now "redcap" and "long" are treated identically. This was added to ease MMRM analyses; in that case, missing baseline values can be carried forward as "last observation carried forward" using the `tidyr::fill()` function, specifying the variables to fill. There is an interesting discussion on filling data [here on Stack Overflow](https://stackoverflow.com/a/13810615). `read_redcap_tables()` now has the option "none" for the `split_forms` parameter to allow not splitting the data.
* FIX: `ds2dd_detailed()`: The `convert_logicals` parameter has been turned off by default and logicals are now interpreted as field type "truefalse". Converting logicals to factors would result in the numeric values being 1 for FALSE and 2 for TRUE, which is the opposite of the traditional notation and could lead to serious problems if not handled correctly. This should solve it.
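A minimal usage sketch of `as_logical()` as described above, together with the kind of `tidyr::fill()` step hinted at for MMRM analyses (the grouping and column names in the fill step are hypothetical):

```r
library(REDCapCAST)

# Two-level vectors matching one of the default pairs become logical;
# the first value of the matched pair is interpreted as TRUE
as_logical(c("Yes", "No", "No", NA, "Yes"))

# The pairs to check can be customised
as_logical(c("In", "Out", "In"), values = list(c("In", "Out")))

# Hypothetical fill step for an MMRM-style analysis: carry baseline values
# forward within each record (column names are placeholders)
# long_data |>
#   dplyr::group_by(record_id) |>
#   tidyr::fill(baseline_score, .direction = "down") |>
#   dplyr::ungroup()
```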
# REDCapCAST 25.1.1
The newly introduced extension of `forcats::fct_drop()` has been corrected to work as intended as a method.
Conversion of column names to `field_names` now aligns better with REDCap naming.
Variable names longer than 100 characters are shortened (a REDCap requirement; note that the recommended variable name length is <26).
Fixed a parameter conflict in `easy_redcap()` when specifying `raw_or_label`.
# REDCapCAST 24.12.1
This release attempts to solve problems with hosting the shiny_cast app, while also implementing functions to preserve as much metadata as possible from the REDCap database when exporting data.
Hosting on shinyapps.io has given a lot of trouble recently. The package structure around `shiny_cast()` has been modified a little to accommodate an alternative hosting approach, with all package functions included in a script instead of requiring the package.
* NEW: A new option for `raw_or_label` in `read_redcap_tables()` has been added: "both". Get raw values with REDCap labels applied as labels. Use `as_factor()` to format factors with the original labels and use the `gtsummary` package to easily get beautiful tables with the original REDCap labels (see the sketch below). Use `fct_drop()` to drop empty levels.
* NEW: `fct_drop()` has been added as an extension to `forcats::fct_drop()` that works across data.frames. Use as `fct_drop()`.
* CHANGE: the default data export method of `easy_redcap()` has been changed to use the new labelled data export with `read_redcap_tables()`.
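A hedged sketch of the labelled-export workflow described above; the API URI, the token handling and the instrument name are placeholders, and the argument names follow the usual REDCapR-style conventions assumed here:

```r
library(REDCapCAST)

tables <- read_redcap_tables(
  uri = "https://redcap.example.org/api/",  # placeholder URI
  token = Sys.getenv("REDCAP_TOKEN"),       # placeholder token handling
  raw_or_label = "both"                     # raw values with labels attached
)

# Convert labelled columns to factors with the original REDCap labels,
# drop unused levels and summarise
tables[["baseline"]] |>   # hypothetical instrument name
  as_factor() |>
  fct_drop() |>
  gtsummary::tbl_summary()
```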
# REDCapCAST 24.11.3
* BUG: `shiny_cast()` failed to load as I missed loading the REDCapCAST library in ui.R. Fixed. Tests would be great.
# REDCapCAST 24.11.2
24.11.1 was rejected on CRAN due to wrong title capitalisation. This was an opportunity to extend the package overhaul, and it actually turned out to be a major step towards a very usable shiny app, which has received most of the focus.
I have implemented an option to specify categorical variables to factorise, doing this with a modified version of {forcats}' and {haven}'s `as_factor()` that preserves any attributes applied to the data, in order to upload and cast REDCap metadata from richly formatted data (use .rds; see the sketch below). No matter the input type, all input is parsed using the default options from the {readr} package. To avoid mis-labelling, logicals are converted to factors, as the REDCap truefalse class follows a different naming convention compared to R. There is also correct support for variable labels as field labels (use .rds-formatted data and label with `labelled::var_label()`).
Vignettes and documentation have been restructured.
This package has been detached from REDCapRITS, which it was originally forked from. The data split function will be kept, while testing will be rewritten. This project has evolved away from the original fork.
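A minimal sketch of labelling a dataset before casting it to a REDCap data dictionary, under the assumptions noted in the comments (the exact return structure of `ds2dd_detailed()` is not shown here):

```r
library(REDCapCAST)

dat <- mtcars
# Variable labels become REDCap field labels when the data are cast
labelled::var_label(dat$mpg) <- "Miles per gallon"
labelled::var_label(dat$cyl) <- "Number of cylinders"

# Save as .rds to keep the labels when uploading through shiny_cast(),
# or cast directly:
saveRDS(dat, "dat.rds")
dd <- ds2dd_detailed(dat)
```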
# REDCapCAST 24.11.1
Revised tests.
Documentation has been slightly updated to highlight the shiny app for casting REDCap metadata. I am working on hosting my own Shiny Server.
### Functions:
* Bug: 'form.name' specified to `ds2dd_detailed()` was ignored. Corrected to only be ignored if 'form.sep' is specified. Added handling of a recurring `form.sep` pattern.
* New: `export_redcap_instrument()` is a new version of `create_instrument_meta()` that only exports a single instrument. Multiple instruments can be exported with `lapply()` or `purrr::map()` (see the sketch below). This allows for inclusion of this functionality in the Shiny implementation and is easier to handle. `create_instrument_meta()` is deprecated.
* Improved: The `shiny_cast()` app has been updated to actually work if you install the package rather than cloning the whole repository.
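A hypothetical sketch of exporting several instruments from a data dictionary `dd`, one zip per `form_name`; the `file` argument name is an assumption made for illustration:

```r
library(REDCapCAST)

# Split the data dictionary by instrument and export each one;
# the `file =` argument name is assumed, not confirmed by the source
dd |>
  split(dd$form_name) |>
  purrr::imap(\(.d, .form) {
    export_redcap_instrument(.d, file = paste0(.form, ".zip"))
  })
```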
### Shiny:
* New: Major overhaul of the app interface with the introduction of `bslib` for building the page. Detailed documentation has also been added for the app workflow.
* New: Export a REDCap instrument ready to add to your database based on an uploaded spreadsheet. This is thanks to the `export_redcap_instrument()` function. This functionality is intended for projects in production, where adding instruments should be handled manually and not by API upload.
* Bug: Export the data dictionary with "" instead of "NA" for NAs. Upload to REDCap used to fail because of this. Not anymore.
The shiny implementation is included with this package. Implementing in shinylive may be looked into again later.
# REDCapCAST 24.10.3
Updated links and spelling.
# REDCapCAST 24.10.1
Minor changes to pass tests, and renv is out. `rhub` is really not running as smoothly as previously.
# REDCapCAST 24.6.1
### Functions
* Fix: `read_redcap_tables()`: field name testing now allows including "[form_name]_complete" fields.
* Fix: `ds2dd_detailed()`: the default record ID name is now "record_id", the REDCap default. The default is still to use the first column name. Support was added for interpreting a column name prefix or suffix as instrument names. See the examples.
* New: `create_instrument_meta()`: creates a zip with instrument files to allow adding new instruments to a project in production. Takes a data dictionary as input and creates a zip for each instrument specified by the `form_name` column.
* New: `doc2dd()`: function to convert a document table to a data dictionary. This allows specifying an instrument or a whole data dictionary in a text document, which for most users is easier to work with and modify. The generic case is a data frame with variable names as values in a column. This is a format like the REDCap data dictionary, but gives a few options for formatting. It has a few related functions for data handling and formatting. One interesting function is `case_match_regex_list()`, which allows for a dynamic `dplyr::case_when()`-like approach for regex matching (see the sketch below). I think it is neat at least.
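A short sketch of `case_match_regex_list()`, matching regex patterns in priority order to derive REDCap validation types from systematic variable names (the variable names shown are made up):

```r
library(REDCapCAST)

case_match_regex_list(
  c("visit_date", "surgery_time", "age"),
  list(date_dmy = "_date$", time_hh_mm_ss = "_time$")
)
# Expected: "date_dmy" for visit_date, "time_hh_mm_ss" for surgery_time,
# and NA (the default) for age
```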
### Documentation and more
* Dependencies: In order to deploy `shiny_cast()` with `shinylive`, I need to remove `curl` as a dependency. To accomplish this, the `shiny_deploy()` helper functions have been moved to the package [`project.aid`](https://github.com/agdamsbo/project.aid). This was before realising that `REDCapR` has `curl` as a dependency, which is the culprit. `REDCapCAST` is not going to be a `shinylive` web app without removing the `REDCapR` dependency or any other REDCap database interaction, which would defy the purpose. I'll stick to a hosted Shiny app instead.
# REDCapCAST 24.2.1
### Functions
@ -14,7 +114,7 @@
* NEW: `read_redcap_instrument()`: convenience function to retrieve complete instrument. Goes a little against the focused approach. With `REDCapR::redcap_read()` you can specify a form to download. You have to also specify the record id variable though. This is done for you with `read_redcap_instrument()`. Nothing fancy.
* NEW: `shiny_cast()`: [Shiny](https://www.rstudio.com/products/shiny/) application to ease the process of converting a spreadsheet/data set to a REDCap database. The app runs locally and data is transferred securely. You can just create and upload the data dictionary, but you can also transfer the given data in the same process. I plan to host the app with shinyapps.io, but for now you can run it locally.
* NEW: `shiny_cast()`: [Shiny](https://shiny.posit.co/) application to ease the process of converting a spreadsheet/data set to a REDCap database. The app runs locally and data is transferred securely. You can just create and upload the data dictionary, but you can also transfer the given data in the same process. I plan to host the app with shinyapps.io, but for now you can run it locally.
### Other
@ -85,7 +185,7 @@ The main goal this package is to keep the option to only export a defined subset
### Functions:
* `read_redcap_tables()` **NEW**: this function is mainly an implementation of the combined use of `REDCapR::readcap_read()` and `REDCap_split()` to maintain the focused nature of `REDCapR::readcap_read()`, to only download the specified data. Also implements tests of valid form names and event names. The usual fall-back solution was to get all data.
* `read_redcap_tables()` **NEW**: this function is mainly an implementation of the combined use of `REDCapR::redcap_read()` and `REDCap_split()` to maintain the focused nature of `REDCapR::redcap_read()`, to only download the specified data. Also implements tests of valid form names and event names. The usual fall-back solution was to get all data.
* `redcap_wider()` **NEW**: this function pivots the long data frames from `read_redcap_tables()` using `tidyr::pivot_wider()`.
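A brief sketch tying these two functions together (the `tables` object is a list of long data.frames as returned by `read_redcap_tables()`, as in the sketch further up):

```r
# Pivot each long table with tidyr::pivot_wider() and combine them into one
# wide data.frame with one row per record
wide <- redcap_wider(tables)
```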

7
R/REDCapCAST-package.R Normal file

@ -0,0 +1,7 @@
#' @keywords internal
"_PACKAGE"
## usethis namespace: start
#' @importFrom openxlsx2 read_xlsx
## usethis namespace: end
NULL


@ -11,11 +11,10 @@
#' \code{data.frame}, \code{response}, or \code{character} vector containing
#' JSON from an API call.
#' @param primary_table_name Name given to the list element for the primary
#' output table (as described in \emph{README.md}). Ignored if
#' \code{forms = 'all'}.
#' output table. Ignored if \code{forms = 'all'}.
#' @param forms Indicate whether to create separate tables for repeating
#' instruments only or for all forms.
#' @author Paul W. Egeler, M.S., GStat
#' @author Paul W. Egeler
#' @examples
#' \dontrun{
#' # Using an API call -------------------------------------------------------
@ -40,7 +39,7 @@
#' )
#'
#' # Convert exported JSON strings into a list of data.frames
#' REDCapRITS::REDCap_split(records, metadata)
#' REDCapCAST::REDCap_split(records, metadata)
#'
#' # Using a raw data export -------------------------------------------------
#'
@ -53,7 +52,7 @@
#' )
#'
#' # Split the tables
#' REDCapRITS::REDCap_split(records, metadata)
#' REDCapCAST::REDCap_split(records, metadata)
#'
#' # In conjunction with the R export script ---------------------------------
#'
@ -70,7 +69,7 @@
#' metadata <- read.csv("ExampleProject_DataDictionary_2018-06-03.csv")
#'
#' # Split the tables
#' REDCapRITS::REDCap_split(data, metadata)
#' REDCapCAST::REDCap_split(data, metadata)
#' setwd(old)
#' }
#' @return A list of \code{"data.frame"}s. The number of tables will differ
@ -87,6 +86,11 @@ REDCap_split <- function(records,
metadata,
primary_table_name = "",
forms = c("repeating", "all")) {
# Processing metadata to reflect focused dataset
# metadata <- focused_metadata(metadata, names(records))
# Requires new testing setup. Not doing that now.
# Process user input
records <- process_user_input(records)
metadata <-

477
R/as_factor.R Normal file

@ -0,0 +1,477 @@
#' Convert labelled vectors to factors while preserving attributes
#'
#' This extends \link[forcats]{as_factor} as well as \link[haven]{as_factor}, by appending
#' original attributes except for "class" after converting to factor to avoid
#' data loss in case of richly formatted and labelled data.
#'
#' Please refer to parent functions for extended documentation.
#' To avoid redundant calls and errors, functions are copy-pasted here
#'
#' Empty variables with empty levels attribute are interpreted as logicals
#'
#' @param x Object to coerce to a factor.
#' @param ... Other arguments passed down to method.
#' @param only_labelled Only apply to labelled columns?
#' @export
#' @examples
#' # will preserve all attributes
#' c(1, 4, 3, "A", 7, 8, 1) |> as_factor()
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10)
#' ) |>
#' as_factor() |>
#' dput()
#'
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "haven_labelled"
#' ) |>
#' as_factor() |> class()
#' structure(rep(NA,10),
#' class = c("labelled")
#' ) |>
#' as_factor() |> summary()
#'
#' rep(NA,10) |> as_factor()
#'
#' @importFrom forcats as_factor
#' @export
#' @name as_factor
as_factor <- function(x, ...) {
UseMethod("as_factor")
}
#' @rdname as_factor
#' @export
as_factor.factor <- function(x, ...) {
x
}
#' @rdname as_factor
#' @export
as_factor.logical <- function(x, ...) {
labels <- get_attr(x)
x <- factor(x, levels = c("FALSE", "TRUE"))
set_attr(x, labels, overwrite = FALSE)
}
#' @rdname as_factor
#' @export
as_factor.numeric <- function(x, ...) {
labels <- get_attr(x)
x <- factor(x)
set_attr(x, labels, overwrite = FALSE)
}
#' @rdname as_factor
#' @export
as_factor.character <- function(x, ...) {
labels <- get_attr(x)
if (possibly_roman(x)) {
x <- factor(x)
} else {
x <- structure(
forcats::fct_inorder(x),
label = attr(x, "label", exact = TRUE)
)
}
set_attr(x, labels, overwrite = FALSE)
}
#' @param ordered If `TRUE` create an ordered (ordinal) factor, if
#' `FALSE` (the default) create a regular (nominal) factor.
#' @param levels How to create the levels of the generated factor:
#'
#' * "default": uses labels where available, otherwise the values.
#' Labels are sorted by value.
#' * "both": like "default", but pastes together the level and value
#' * "label": use only the labels; unlabelled values become `NA`
#' * "values": use only the values
#' @rdname as_factor
#' @export
as_factor.haven_labelled <- function(x, levels = c("default", "labels", "values", "both"),
ordered = FALSE, ...) {
labels_all <- get_attr(x)
levels <- match.arg(levels)
label <- attr(x, "label", exact = TRUE)
labels <- attr(x, "labels")
if (levels %in% c("default", "both")) {
if (levels == "both") {
names(labels) <- paste0("[", labels, "] ", names(labels))
}
# Replace each value with its label
vals <- unique(vctrs::vec_data(x))
levs <- replace_with(vals, unname(labels), names(labels))
# Ensure all labels are preserved
levs <- sort(c(stats::setNames(vals, levs), labels), na.last = TRUE)
levs <- unique(names(levs))
x <- replace_with(vctrs::vec_data(x), unname(labels), names(labels))
x <- factor(x, levels = levs, ordered = ordered)
} else if (levels == "labels") {
levs <- unname(labels)
labs <- names(labels)
x <- replace_with(vctrs::vec_data(x), levs, labs)
x <- factor(x, unique(labs), ordered = ordered)
} else if (levels == "values") {
if (all(x %in% labels)) {
levels <- unname(labels)
} else {
levels <- sort(unique(vctrs::vec_data(x)))
}
x <- factor(vctrs::vec_data(x), levels, ordered = ordered)
}
x <- structure(x, label = label)
out <- set_attr(x, labels_all, overwrite = FALSE)
if (all_na(out) & length(levels(out))==0){
as_factor.logical(out)
} else {
out
}
}
#' @export
#' @rdname as_factor
as_factor.labelled <- as_factor.haven_labelled
#' @rdname as_factor
#' @export
as_factor.data.frame <- function(x, ..., only_labelled = TRUE) {
if (only_labelled) {
labelled <- vapply(x, is.labelled, logical(1))
x[labelled] <- lapply(x[labelled], as_factor, ...)
} else {
x[] <- lapply(x, as_factor, ...)
}
x
}
#' Tests for multiple label classes
#'
#' @param x data
#' @param classes classes to test
#'
#' @return logical
#' @export
#'
#' @examples
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "haven_labelled"
#' ) |> is.labelled()
is.labelled <- function(x, classes = c("haven_labelled", "labelled")) {
classes |>
sapply(\(.class){
inherits(x, .class)
}) |>
any()
}
replace_with <- function(x, from, to) {
stopifnot(length(from) == length(to))
out <- x
# First replace regular values
matches <- match(x, from, incomparables = NA)
if (anyNA(matches)) {
out[!is.na(matches)] <- to[matches[!is.na(matches)]]
} else {
out <- to[matches]
}
# Then tagged missing values
tagged <- haven::is_tagged_na(x)
if (!any(tagged)) {
return(out)
}
matches <- match(haven::na_tag(x), haven::na_tag(from), incomparables = NA)
# Could possibly be faster to use anyNA(matches)
out[!is.na(matches)] <- to[matches[!is.na(matches)]]
out
}
#' Get named vector of factor levels and values
#'
#' @param data factor
#' @param label character string of attribute with named vector of factor labels
#' @param na.label character string to refactor NA values. Default is NULL.
#' @param na.value new value for NA strings. Ignored if na.label is NULL.
#' Default is 99.
#' @param sort.numeric sort factor levels if levels are numeric. Default is TRUE
#'
#' @return named vector
#' @export
#'
#' @examples
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "haven_labelled"
#' ) |>
#' as_factor() |>
#' named_levels()
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "labelled"
#' ) |>
#' as_factor() |>
#' named_levels()
named_levels <- function(data, label = "labels", na.label = NULL, na.value = 99, sort.numeric=TRUE) {
stopifnot(is.factor(data))
if (!is.null(na.label)) {
attrs <- attributes(data)
lvls <- as.character(data)
lvls[is.na(lvls)] <- na.label
vals <- as.numeric(data)
vals[is.na(vals)] <- na.value
lbls <- data.frame(
name = lvls,
value = vals
) |>
unique() |>
(\(d){
stats::setNames(d$value, d$name)
})() |>
sort()
data <- do.call(
structure,
c(
list(.Data = match(vals, lbls)),
attrs[-match("levels", names(attrs))],
list(
levels = names(lbls),
labels = lbls
)
)
)
}
# Handle empty factors
if (all_na(data)) {
d <- data.frame(
name = levels(data),
value = seq_along(levels(data))
)
} else {
d <- data.frame(
name = levels(data)[data],
value = as.numeric(data)
) |>
unique() |>
stats::na.omit()
}
## Applying labels
attr_l <- attr(x = data, which = label, exact = TRUE)
if (length(attr_l) != 0) {
if (all(names(attr_l) %in% d$name)) {
d$value[match(names(attr_l), d$name)] <- unname(attr_l)
} else if (all(d$name %in% names(attr_l)) && nrow(d) < length(attr_l)) {
d <- data.frame(
name = names(attr_l),
value = unname(attr_l)
)
} else {
d$name[match(attr_l, d$name)] <- names(attr_l)
d$value[match(names(attr_l), d$name)] <- unname(attr_l)
}
}
out <- stats::setNames(d$value, d$name)
## Sort if levels are numeric
## Else, they appear in order of appearance
if (possibly_numeric(levels(data)) && sort.numeric) {
out <- out |> sort()
}
out
}
#' Test if vector can be interpreted as roman numerals
#'
#' @param data character vector
#'
#' @return logical
#' @export
#'
#' @examples
#' sample(1:100, 10) |>
#' as.roman() |>
#' possibly_roman()
#' sample(c(TRUE, FALSE), 10, TRUE) |> possibly_roman()
#' rep(NA, 10) |> possibly_roman()
possibly_roman <- function(data) {
if (all(is.na(data))) {
return(FALSE)
}
identical(as.character(data),
as.character(suppressWarnings(utils::as.roman(data))))
}
#' Allows conversion of factor to numeric values preserving original levels
#'
#' @param data vector
#'
#' @return numeric vector
#' @export
#'
#' @examples
#' c(1, 4, 3, "A", 7, 8, 1) |>
#' as_factor() |>
#' fct2num()
#'
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "haven_labelled"
#' ) |>
#' as_factor() |>
#' fct2num()
#'
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "labelled"
#' ) |>
#' as_factor() |>
#' fct2num()
#'
#' structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10)
#' ) |>
#' as_factor() |>
#' fct2num()
fct2num <- function(data) {
stopifnot(is.factor(data))
if (is.character(named_levels(data))) {
values <- as.numeric(named_levels(data))
} else {
values <- named_levels(data)
}
out <- values[match(data, names(named_levels(data)))]
## If no NA on numeric coercion, of original names, then return
## original numeric names, else values
if (possibly_numeric(names(out))) {
out <- as.numeric(names(out))
}
unname(out)
}
#' Tests if vector can be interpreted as numeric without introducing NAs by
#' coercion
#'
#' @param data vector
#'
#' @return logical
#' @export
#'
#' @examples
#' c("1","5") |> possibly_numeric()
#' c("1","5","e") |> possibly_numeric()
possibly_numeric <- function(data) {
suppressWarnings(
length(stats::na.omit(as.numeric(data))) ==
length(data)
)
}
#' Extract attribute. Returns NA if none
#'
#' @param data vector
#' @param attr attribute name
#'
#' @return character vector
#' @export
#'
#' @examples
#' attr(mtcars$mpg, "label") <- "testing"
#' do.call(c, sapply(mtcars, get_attr))
#' \dontrun{
#' mtcars |>
#' numchar2fct(numeric.threshold = 6) |>
#' ds2dd_detailed()
#' }
get_attr <- function(data, attr = NULL) {
if (is.null(attr)) {
attributes(data)
} else {
a <- attr(data, attr, exact = TRUE)
if (is.null(a)) {
NA
} else {
a
}
}
}
#' Set attributes for named attribute. Appends if attr is NULL
#'
#' @param data vector
#' @param label label
#' @param attr attribute name
#' @param overwrite overwrite existing attributes. Default is FALSE.
#'
#' @return vector with attribute
#' @export
#'
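#' @examples
#' # Illustrative sketch (not from the original source): set a label
#' # attribute on a vector and read it back with get_attr()
#' x <- set_attr(letters[1:3], "A letter", attr = "label")
#' get_attr(x, "label")
#'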
set_attr <- function(data, label, attr = NULL, overwrite = FALSE) {
# browser()
if (is.null(attr)) {
## Has to be a named list
## Will not fail, but just return original data
if (!is.list(label) | length(label) != length(names(label))) {
return(data)
}
## Only include named labels
label <- label[!is.na(names(label))]
if (!overwrite) {
label <- label[!names(label) %in% names(attributes(data))]
}
attributes(data) <- c(attributes(data), label)
} else {
attr(data, attr) <- label
}
data
}
#' Finish incomplete haven attributes substituting missings with values
#'
#' @param data haven labelled variable
#'
#' @return named vector
#' @export
#'
#' @examples
#' ds <- structure(c(1, 2, 3, 2, 10, 9),
#' labels = c(Unknown = 9, Refused = 10),
#' class = "haven_labelled"
#' )
#' haven::is.labelled(ds)
#' attributes(ds)
#' ds |> haven_all_levels()
haven_all_levels <- function(data) {
stopifnot(haven::is.labelled(data))
if (length(attributes(data)$labels) == length(unique(data))) {
out <- attributes(data)$labels
} else {
att <- attributes(data)$labels
out <- c(unique(data[!data %in% att]), att) |>
stats::setNames(c(unique(data[!data %in% att]), names(att)))
}
out
}

116
R/as_logical.R Normal file

@ -0,0 +1,116 @@
#' Interpret specific binary values as logicals
#'
#' @param x vector or data.frame
#' @param values list of values to interpret as logicals. First value is
#' interpreted as TRUE.
#' @param ... ignored
#'
#' @returns vector
#' @export
#'
#' @examples
#' c(sample(c("TRUE", "FALSE"), 20, TRUE), NA) |>
#' as_logical() |>
#' class()
#' ds <- dplyr::tibble(
#' B = factor(sample(c(1, 2), 20, TRUE)),
#' A = factor(sample(c("TRUE", "FALSE"), 20, TRUE)),
#' C = sample(c(3, 4), 20, TRUE),
#' D = factor(sample(c("In", "Out"), 20, TRUE))
#' )
#' ds |>
#' as_logical() |>
#' sapply(class)
#' ds$A |> class()
#' sample(c("TRUE",NA), 20, TRUE) |>
#' as_logical()
#' as_logical(0)
#' @name as_logical
as_logical <- function(x,
values = list(
c("TRUE", "FALSE"),
c("Yes", "No"),
c(1, 0),
c(1, 2)
),
...) {
UseMethod("as_logical")
}
#' @rdname as_logical
#' @export
as_logical.data.frame <- function(x,
values = list(
c("TRUE", "FALSE"),
c("Yes", "No"),
c(1, 0),
c(1, 2)
),
...) {
as.data.frame(lapply(x, \(.x){
as_logical.default(x = .x, values = values)
}))
}
#' @rdname as_logical
#' @export
as_logical.default <- function(x,
values = list(
c("TRUE", "FALSE"),
c("Yes", "No"),
c(1, 0),
c(1, 2)
),
...) {
label <- REDCapCAST::get_attr(x, "label")
# browser()
out <- c()
if (any(
c(
"character",
"factor",
"numeric"
) %in% class(x)
)){
if (length(unique(x[!is.na(x)])) == 2) {
if (is.factor(x)) {
match_index <- which(sapply(values, \(.x){
all(.x %in% levels(x))
}))
} else {
match_index <- which(sapply(values, \(.x){
all(.x %in% x)
}))
}
} else if (length(unique(x[!is.na(x)])) == 1){
if (is.factor(x)) {
match_index <- which(sapply(values, \(.x){
any(.x %in% levels(x))
}))
} else {
match_index <- which(sapply(values, \(.x){
any(.x %in% x)
}))
}
} else {
match_index <- c()
}
if (length(match_index) == 1) {
out <- x == values[[match_index]][1]
} else if (length(match_index) > 1) {
# If matching several, the first match is used.
out <- x == values[[match_index[1]]][1]
}
}
if (length(out) == 0) {
out <- x
}
if (!is.na(label)) {
out <- REDCapCAST::set_attr(out, label = label, attr = "label")
}
out
}

313
R/doc2dd.R Normal file

@ -0,0 +1,313 @@
utils::globalVariables(c("calculations", "choices"))
#' Doc table to data dictionary - EARLY, DOCS MISSING
#'
#' @description
#' Works well with `project.aid::docx2list()`.
#' Allows defining a database in a text document (see provided template) for
#' easier database creation. This approach allows easier
#' collaboration when defining the database. The generic case is a data frame
#' with variable names as values in a column. This is a format like the REDCap
#' data dictionary, but gives a few options for formatting.
#'
#' @param data tibble or data.frame with all variable names in one column
#' @param instrument.name character vector length one. Instrument name.
#' @param col.variables variable names column (default = 1), allows dplyr
#' subsetting
#' @param list.datetime.format formatting for date/time detection.
#' See `case_match_regex_list()`
#' @param col.description descriptions column, allows dplyr
#' subsetting. If empty, variable names will be used.
#' @param col.condition conditions for branching column, allows dplyr
#' subsetting. See `char2cond()`.
#' @param col.subheader sub-header column, allows dplyr subsetting.
#' See `format_subheader()`.
#' @param subheader.tag formatting tag. Default is "h2"
#' @param condition.minor.sep condition split minor. See `char2cond()`.
#' Default is ",".
#' @param condition.major.sep condition split major. See `char2cond()`.
#' Default is ";".
#' @param col.calculation calculations column. Has to be written exactly.
#' Character vector.
#' @param col.choices choices column. See `char2choice()`.
#' @param choices.char.sep choices split. See `char2choice()`. Default is "/".
#' @param missing.default value for missing fields. Default is NA.
#'
#' @return tibble or data.frame (same as data)
#' @export
#'
#' @examples
#' # data <- dd_inst
#' # data |> doc2dd(instrument.name = "evt",
#' # col.description = 3,
#' # col.condition = 4,
#' # col.subheader = 2,
#' # col.calculation = 5,
#' # col.choices = 6)
doc2dd <- function(data,
instrument.name,
col.variables = 1,
list.datetime.format = list(
date_dmy = "_dat[eo]$",
time_hh_mm_ss = "_ti[md]e?$"
),
col.description = NULL,
col.condition = NULL,
col.subheader = NULL,
subheader.tag = "h2",
condition.minor.sep = ",",
condition.major.sep = ";",
col.calculation = NULL,
col.choices = NULL,
choices.char.sep = "/",
missing.default = NA) {
data <- data |>
dplyr::mutate(dplyr::across(dplyr::everything(), ~ dplyr::na_if(.x, c(""))))
## Defining the field name
out <- data |>
dplyr::mutate(
field_name = dplyr::pick(col.variables) |> unlist()
)
## Defining the field label. Field name is used if no label is provided.
if (is_missing(col.description)) {
out <- out |>
dplyr::mutate(
field_label = field_name
)
} else {
out <- out |>
dplyr::mutate(
field_label = dplyr::pick(col.description) |> unlist()
)
}
## Defining the sub-header
if (!is_missing(col.subheader)) {
out <- out |>
dplyr::mutate(
section_header = dplyr::pick(col.subheader) |>
unlist() |>
format_subheader(tag = subheader.tag)
)
}
## Defining the choices
if (is_missing(col.choices)) {
out <- out |>
dplyr::mutate(
choices = missing.default
)
} else {
out <- out |>
dplyr::mutate(
choices = dplyr::pick(col.choices) |>
unlist() |>
char2choice(char.split = choices.char.sep)
)
}
## Defining the calculations
if (is_missing(col.calculation)) {
out <- out |>
dplyr::mutate(
calculations = missing.default
)
} else {
# With inspiration from textclean package, curly apostrophe is replaced
out <- out |>
dplyr::mutate(
calculations = dplyr::pick(col.calculation) |>
unlist() |>
tolower() |>
replace_curly_quote()
)
}
## Merging choices and calculations, defining field type and setting form name
out <- out |>
dplyr::mutate(
select_choices_or_calculations = dplyr::coalesce(calculations, choices),
field_type = dplyr::case_when(!is.na(choices) ~ "radio",
!is.na(calculations) ~ "calc",
.default = "text"
),
form_name = instrument.name
)
## Defining branching logic from conditions
if (is_missing(col.condition)) {
out <- out |>
dplyr::mutate(
branching_logic = missing.default
)
} else {
out <- out |>
dplyr::mutate(
branching_logic = dplyr::pick(col.condition) |>
unlist() |>
char2cond(minor.split = condition.minor.sep,
major.split = condition.major.sep)
)
}
## Detecting date/time formatting from systematic field names
if (is.null(list.datetime.format)) {
out <- out |>
dplyr::mutate(
text_validation_type_or_show_slider_number = missing.default
)
} else {
out <- out |>
dplyr::mutate(
text_validation_type_or_show_slider_number = case_match_regex_list(
field_name,
list.datetime.format
)
)
}
## Selecting relevant columns
out <- out |>
dplyr::select(dplyr::any_of(names(REDCapCAST::redcapcast_meta)))
## Merging and ordering columns for upload
out |>
list(REDCapCAST::redcapcast_meta |> dplyr::slice(0)) |>
dplyr::bind_rows() |>
dplyr::select(names(REDCapCAST::redcapcast_meta))
}
#' Simple function to generate REDCap choices from character vector
#'
#' @param data vector
#' @param char.split splitting character(s)
#' @param raw specific values. Can be used for options of same length.
#' @param .default default value for missing. Default is NA.
#'
#' @return vector
#' @export
#'
#' @examples
#' char2choice(c("yes/no"," yep. / nope ","",NA,"what"),.default=NA)
char2choice <- function(data, char.split = "/", raw = NULL,.default=NA) {
ls <- strsplit(x = data, split = char.split)
ls |>
purrr::map(function(.x) {
if (is.null(raw)) {
raw <- seq_len(length(.x))
}
if (length(.x) == 0 | all(is.na(.x))) {
.default
} else {
paste(paste0(raw, ", ",trimws(.x)), collapse = " | ")
}
}) |>
purrr::list_c()
}
#' Simple function to generate REDCap branching logic from character vector
#'
#' @param data vector
#' @param .default default value for missing. Default is NA.
#' @param minor.split minor split
#' @param major.split major split
#' @param major.sep argument separation. Default is " or ".
#'
#' @return vector
#' @export
#'
#' @examples
#' #data <- dd_inst$betingelse
#' #c("Extubation_novent, 2; Pacu_delay, 1") |> char2cond()
char2cond <- function(data, minor.split = ",", major.split = ";", major.sep = " or ", .default = NA) {
strsplit(x = data, split = major.split) |>
purrr::map(function(.y) {
strsplit(x = .y, split = minor.split) |>
purrr::map(function(.x) {
if (length(.x) == 0 | all(is.na(.x))) {
.default
} else {
glue::glue("[{trimws(tolower(.x[1]))}]='{trimws(.x[2])}'")
}
}) |>
purrr::list_c() |>
glue::glue_collapse(sep = major.sep)
}) |>
purrr::list_c()
}
#' List-base regex case_when
#'
#' @description
#' Mimics case_when for list of regex patterns and values. Used for date/time
#' validation generation from name vector. Like case_when, the matches are in
#' order of priority.
#' Primarily used in REDCapCAST to do data type coding from systematic variable
#' naming.
#'
#' @param data vector
#' @param match.list list of case matches
#' @param .default Default value for non-matches. Default is NA.
#'
#' @return vector
#' @export
#'
#' @examples
#' case_match_regex_list(
#' c("test_date", "test_time", "test_tida", "test_tid"),
#' list(date_dmy = "_dat[eo]$", time_hh_mm_ss = "_ti[md]e?$")
#' )
case_match_regex_list <- function(data, match.list, .default = NA) {
match.list |>
purrr::imap(function(.z, .i) {
dplyr::if_else(grepl(.z, data), .i, NA)
}) |>
(\(.x){
dplyr::coalesce(!!!.x)
})() |>
(\(.x){
dplyr::if_else(is.na(.x), .default, .x)
})()
}
#' Multi missing check
#'
#' @param data character vector
#' @param nas character vector of strings considered as NA
#'
#' @return logical vector
is_missing <- function(data,nas=c("", "NA")) {
if (is.null(data)) {
TRUE
} else {
is.na(data) | data %in% nas
}
}
#' Replace curly apostrophes and quotes from word
#'
#' @description
#' Copied from textclean, which has not been updated since 2018 and is not
#' on CRAN. Github:https://github.com/trinker/textclean
#'
#' @param x character vector
#'
#' @return character vector
replace_curly_quote <- function(x){
replaces <- c('\x91', '\x92', '\x93', '\x94')
Encoding(replaces) <- "latin1"
for (i in 1:4) {
x <- gsub(replaces[i], c("'", "'", "\"", "\"")[i], x, fixed = TRUE)
}
x
}


@ -1,89 +0,0 @@
utils::globalVariables(c("metadata_names"))
#' (DEPRECATED) Data set to data dictionary function
#'
#' @description
#' Creates a very basic data dictionary skeleton. Please see `ds2dd_detailed()`
#' for a more advanced function.
#'
#' @details
#' Migrated from stRoke ds2dd(). Fits better with the functionality of
#' 'REDCapCAST'.
#' @param ds data set
#' @param record.id name or column number of id variable, moved to first row of
#' data dictionary, character or integer. Default is "record_id".
#' @param form.name vector of form names, character string, length 1 or length
#' equal to number of variables. Default is "basis".
#' @param field.type vector of field types, character string, length 1 or length
#' equal to number of variables. Default is "text".
#' @param field.label vector of field labels, character string, length 1 or length
#' equal to number of variables. Default is NULL and is then identical to field
#' names.
#' @param include.column.names Flag to give detailed output including new
#' column names for original data set for upload.
#' @param metadata Metadata column names. Default is the included
#' REDCapCAST::metadata_names.
#'
#' @return data.frame or list of data.frame and vector
#' @export
#'
#' @examples
#' redcapcast_data$record_id <- seq_len(nrow(redcapcast_data))
#' ds2dd(redcapcast_data, include.column.names=TRUE)
ds2dd <-
function(ds,
record.id = "record_id",
form.name = "basis",
field.type = "text",
field.label = NULL,
include.column.names = FALSE,
metadata = metadata_names) {
dd <- data.frame(matrix(ncol = length(metadata), nrow = ncol(ds)))
colnames(dd) <- metadata
if (is.character(record.id) && !record.id %in% colnames(ds)) {
stop("Provided record.id is not a variable name in provided data set.")
}
# renaming to lower case and substitute spaces with underscore
field.name <- gsub(" ", "_", tolower(colnames(ds)))
# handles both character and integer
colsel <-
colnames(ds) == colnames(ds[record.id])
if (summary(colsel)[3] != 1) {
stop("Provided record.id has to be or refer to a uniquely named column.")
}
dd[, "field_name"] <-
c(field.name[colsel], field.name[!colsel])
if (length(form.name) > 1 && length(form.name) != ncol(ds)) {
stop(
"Provided form.name should be of length 1 (value is reused) or equal
length as number of variables in data set."
)
}
dd[, "form_name"] <- form.name
if (length(field.type) > 1 && length(field.type) != ncol(ds)) {
stop(
"Provided field.type should be of length 1 (value is reused) or equal
length as number of variables in data set."
)
}
dd[, "field_type"] <- field.type
if (is.null(field.label)) {
dd[, "field_label"] <- dd[, "field_name"]
} else
dd[, "field_label"] <- field.label
if (include.column.names){
list("DataDictionary"=dd,"Column names"=field.name)
} else dd
}


@ -1,8 +1,8 @@
utils::globalVariables(c(
"stats::setNames",
"field_name",
"field_type",
"select_choices_or_calculations"
"select_choices_or_calculations",
"field_label"
))
#' Try at determining which are true time only variables
#'
@ -97,6 +97,97 @@ hms2character <- function(data) {
dplyr::bind_cols()
}
#' (DEPRECATED) Data set to data dictionary function
#'
#' @description
#' Creates a very basic data dictionary skeleton. Please see `ds2dd_detailed()`
#' for a more advanced function.
#'
#' @details
#' Migrated from stRoke ds2dd(). Fits better with the functionality of
#' 'REDCapCAST'.
#' @param ds data set
#' @param record.id name or column number of id variable, moved to first row of
#' data dictionary, character or integer. Default is "record_id".
#' @param form.name vector of form names, character string, length 1 or length
#' equal to number of variables. Default is "basis".
#' @param field.type vector of field types, character string, length 1 or length
#' equal to number of variables. Default is "text".
#' @param field.label vector of field labels, character string, length 1 or length
#' equal to number of variables. Default is NULL and is then identical to field
#' names.
#' @param include.column.names Flag to give detailed output including new
#' column names for original data set for upload.
#' @param metadata Metadata column names. Default is the included
#' names(REDCapCAST::redcapcast_meta).
#'
#' @return data.frame or list of data.frame and vector
#' @export
#'
#' @examples
#' redcapcast_data$record_id <- seq_len(nrow(redcapcast_data))
#' ds2dd(redcapcast_data, include.column.names = TRUE)
ds2dd <-
function(ds,
record.id = "record_id",
form.name = "basis",
field.type = "text",
field.label = NULL,
include.column.names = FALSE,
metadata = names(REDCapCAST::redcapcast_meta)) {
dd <- data.frame(matrix(ncol = length(metadata), nrow = ncol(ds)))
colnames(dd) <- metadata
if (is.character(record.id) && !record.id %in% colnames(ds)) {
stop("Provided record.id is not a variable name in provided data set.")
}
# renaming to lower case and substitute spaces with underscore
field.name <- gsub(" ", "_", tolower(colnames(ds)))
# handles both character and integer
colsel <-
colnames(ds) == colnames(ds[record.id])
if (summary(colsel)[3] != 1) {
stop("Provided record.id has to be or refer to a uniquely named column.")
}
dd[, "field_name"] <-
c(field.name[colsel], field.name[!colsel])
if (length(form.name) > 1 && length(form.name) != ncol(ds)) {
stop(
"Provided form.name should be of length 1 (value is reused) or equal
length as number of variables in data set."
)
}
dd[, "form_name"] <- form.name
if (length(field.type) > 1 && length(field.type) != ncol(ds)) {
stop(
"Provided field.type should be of length 1 (value is reused) or equal
length as number of variables in data set."
)
}
dd[, "field_type"] <- field.type
if (is.null(field.label)) {
dd[, "field_label"] <- dd[, "field_name"]
} else {
dd[, "field_label"] <- field.label
}
if (include.column.names) {
list("DataDictionary" = dd, "Column names" = field.name)
} else {
dd
}
}
#' Extract data from stata file for data dictionary
#'
#' @details
@ -114,6 +205,11 @@ hms2character <- function(data) {
#' @param add.auto.id flag to add id column
#' @param form.name manually specify form name(s). Vector of length 1 or
#' ncol(data). Default is NULL and "data" is used.
#' @param form.sep If the supplied dataset has form names as a suffix or prefix
#' to the column/variable names, the separator can be specified here. If
#' supplied, form.name is ignored. Default is NULL.
#' @param form.prefix Flag to set if form is prefix (TRUE) or suffix (FALSE) to
#' the column names. Assumes all columns have pre- or suffix if specified.
#' @param field.type manually specify field type(s). Vector of length 1 or
#' ncol(data). Default is NULL and "text" is used for everything but factors,
#' which will get "radio".
@ -128,78 +224,75 @@ hms2character <- function(data) {
#' or attribute `factor.labels.attr` for haven_labelled data set (imported .dta
#' file with `haven::read_dta()`).
#' @param metadata redcap metadata headings. Default is
#' REDCapCAST:::metadata_names.
#' @param validate.time Flag to validate guessed time columns
#' @param time.var.sel.pos Positive selection regex string passed to
#' `gues_time_only_filter()` as sel.pos.
#' @param time.var.sel.neg Negative selection regex string passed to
#' `gues_time_only_filter()` as sel.neg.
#' names(REDCapCAST::redcapcast_meta).
#' @param convert.logicals convert logicals to factor. Default is FALSE.
#'
#' @return list of length 2
#' @export
#'
#' @examples
#' data <- redcapcast_data
#' data |> ds2dd_detailed(validate.time = TRUE)
#' data |> ds2dd_detailed()
#' ## Basic parsing with default options
#' requireNamespace("REDCapCAST")
#' redcapcast_data |>
#' dplyr::select(-dplyr::starts_with("redcap_")) |>
#' ds2dd_detailed()
#'
#' ## Adding a record_id field
#' iris |> ds2dd_detailed(add.auto.id = TRUE)
#' mtcars |> ds2dd_detailed(add.auto.id = TRUE)
#'
#' ## Passing form name information to function
#' iris |>
#' ds2dd_detailed(
#' add.auto.id = TRUE,
#' form.name = sample(c("b", "c"), size = 6, replace = TRUE, prob = rep(.5, 2))
#' ) |>
#' purrr::pluck("meta")
#' mtcars |>
#' dplyr::mutate(unknown = NA) |>
#' numchar2fct() |>
#' ds2dd_detailed(add.auto.id = TRUE)
#'
#' ## Using column name suffix to carry form name
#' data <- iris |>
#' ds2dd_detailed(add.auto.id = TRUE) |>
#' purrr::pluck("data")
#' names(data) <- glue::glue("{sample(x = c('a','b'),size = length(names(data)),
#' replace=TRUE,prob = rep(x=.5,2))}__{names(data)}")
#' data |> ds2dd_detailed(form.sep = "__")
ds2dd_detailed <- function(data,
add.auto.id = FALSE,
date.format = "dmy",
form.name = NULL,
form.sep = NULL,
form.prefix = TRUE,
field.type = NULL,
field.label = NULL,
field.label.attr = "label",
field.validation = NULL,
metadata = metadata_names,
validate.time = FALSE,
time.var.sel.pos = "[Tt]i[d(me)]",
time.var.sel.neg = "[Dd]at[eo]") {
metadata = names(REDCapCAST::redcapcast_meta),
convert.logicals = FALSE) {
short_names <- colnames(data) |>
lapply(\(.x) cut_string_length(.x, l = 90)) |>
purrr::reduce(c)
data <- stats::setNames(data, short_names)
if (convert.logicals) {
data <- data |>
## Converts logical to factor, which overwrites attributes
dplyr::mutate(dplyr::across(dplyr::where(is.logical), as_factor))
## Problematic example:
## as.logical(sample(0:1,10,TRUE)) |> as.factor() |> as.numeric()
## Possible solution would be to subtract values by 1, so
## "0, FALSE | 1, TRUE" like native REDCap
}
## Handles the odd case of no id column present
if (add.auto.id) {
data <- dplyr::tibble(
default_trial_id = seq_len(nrow(data)),
record_id = seq_len(nrow(data)),
data
)
message("A default id column has been added")
}
if (validate.time) {
return(data |> guess_time_only_filter(validate = TRUE))
}
if (lapply(data, haven::is.labelled) |> (\(x)do.call(c, x))() |> any()) {
message("Data seems to be imported with haven from a Stata (.dta) file and
will be treated as such.")
data.source <- "dta"
} else {
data.source <- ""
}
## data classes
### Only keeps the first class, as time fields (POSIXct/POSIXt) has two
### classes
if (data.source == "dta") {
data_classes <-
data |>
haven::as_factor() |>
time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
) |>
lapply(\(x)class(x)[1]) |>
(\(x)do.call(c, x))()
} else {
data_classes <-
data |>
time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
) |>
lapply(\(x)class(x)[1]) |>
(\(x)do.call(c, x))()
}
## ---------------------------------------
@ -212,41 +305,64 @@ ds2dd_detailed <- function(data,
stats::setNames(metadata) |>
dplyr::tibble()
dd$field_name <- gsub(" ", "_", tolower(colnames(data)))
## form_name and field_name
if (!is.null(form.sep)) {
if (form.sep != "") {
parts <- strsplit(names(data), split = form.sep)
## form_name
if (is.null(form.name)) {
dd$form_name <- "data"
} else {
if (length(form.name) == 1 || length(form.name) == nrow(dd)) {
dd$form_name <- form.name
      ## form.sep should be unique; recurring patterns are handled by only
      ## considering the first (or last) part, and form.prefix defines whether
      ## the form name is a prefix or a suffix.
## The other split part is used as field names
if (form.prefix) {
dd$form_name <- clean_redcap_name(Reduce(c, lapply(parts, \(.x) .x[[1]])))
dd$field_name <- Reduce(c, lapply(parts, \(.x) paste(.x[seq_len(length(.x))[-1]], collapse = form.sep)))
} else {
dd$form_name <- clean_redcap_name(Reduce(c, lapply(parts, \(.x) .x[[length(.x)]])))
dd$field_name <- Reduce(c, lapply(parts, \(.x) paste(.x[seq_len(length(.x) - 1)], collapse = form.sep)))
}
## To preserve original
colnames(data) <- dd$field_name
dd$field_name <- tolower(dd$field_name)
} else {
stop("Length of supplied 'form.name' has to be one (1) or ncol(data).")
dd$form_name <- "data"
# dd$field_name <- gsub(" ", "_", tolower(colnames(data)))
dd$field_name <- clean_redcap_name(colnames(data))
}
} else {
## if no form name prefix, the colnames are used as field_names
# dd$field_name <- gsub(" ", "_", tolower(colnames(data)))
dd$field_name <- clean_redcap_name(colnames(data))
if (is.null(form.name)) {
dd$form_name <- "data"
} else {
if (length(form.name) == 1 || length(form.name) == nrow(dd)) {
dd$form_name <- form.name
} else {
stop("Length of supplied 'form.name' has to be one (1) or ncol(data).")
}
}
}
## field_label
if (is.null(field.label)) {
if (data.source == "dta") {
label <- data |>
lapply(function(x) {
if (haven::is.labelled(x)) {
attributes(x)[[field.label.attr]]
} else {
NA
}
}) |>
(\(x)do.call(c, x))()
} else {
label <- data |> colnames()
}
dd$field_label <- data |>
sapply(function(x) {
get_attr(x, attr = field.label.attr) |>
compact_vec()
})
dd <-
dd |> dplyr::mutate(field_label = dplyr::if_else(is.na(label),
field_name, label
))
dd |>
dplyr::mutate(
field_label = dplyr::if_else(is.na(field_label),
colnames(data),
field_label
)
)
} else {
## It really should be unique for each: same length as number of variables
if (length(field.label) == 1 || length(field.label) == nrow(dd)) {
dd$field_label <- field.label
} else {
@ -254,6 +370,7 @@ ds2dd_detailed <- function(data,
}
}
data_classes <- do.call(c, lapply(data, \(.x)class(.x)[1]))
## field_type
@ -261,9 +378,14 @@ ds2dd_detailed <- function(data,
dd$field_type <- "text"
dd <-
dd |> dplyr::mutate(field_type = dplyr::if_else(data_classes == "factor",
"radio", field_type
))
dd |> dplyr::mutate(
field_type = dplyr::case_match(
data_classes,
"factor"~"radio",
"logical"~"truefalse",
.default = field_type
)
)
} else {
if (length(field.type) == 1 || length(field.type) == nrow(dd)) {
dd$field_type <- field.type
@ -273,7 +395,6 @@ ds2dd_detailed <- function(data,
}
## validation
if (is.null(field.validation)) {
dd <-
dd |> dplyr::mutate(
@ -297,41 +418,19 @@ ds2dd_detailed <- function(data,
}
}
## choices
if (data.source == "dta") {
factor_levels <- data |>
lapply(function(x) {
if (haven::is.labelled(x)) {
att <- attributes(x)$labels
paste(paste(att, names(att), sep = ", "), collapse = " | ")
} else {
NA
}
}) |>
(\(x)do.call(c, x))()
} else {
factor_levels <- data |>
lapply(function(x) {
if (is.factor(x)) {
## Re-factors to avoid confusion with missing levels
        ## Assumes all relevant levels are represented in the data
re_fac <- factor(x)
paste(
paste(unique(as.numeric(re_fac)),
levels(re_fac),
sep = ", "
),
collapse = " | "
)
} else {
NA
}
}) |>
(\(x)do.call(c, x))()
}
factor_levels <- data |>
sapply(function(x) {
if (is.factor(x)) {
## Custom function to ensure factor order and keep original values
## Avoiding refactoring to keep as much information as possible
sort(named_levels(x)) |>
vec2choice()
} else {
NA
}
})
dd <-
dd |> dplyr::mutate(
@ -342,18 +441,74 @@ ds2dd_detailed <- function(data,
)
)
list(
out <- list(
data = data |>
time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
) |>
hms2character() |>
(\(x)stats::setNames(x, tolower(names(x))))(),
stats::setNames(dd$field_name) |>
lapply(\(.x){
if (identical("factor", class(.x))) {
as.numeric(.x)
} else {
.x
}
}) |> dplyr::bind_cols(),
meta = dd
)
class(out) <- c("REDCapCAST", class(out))
out
}
#' Check if vector is all NA
#'
#' @param data vector of data.frame
#'
#' @return logical
#' @export
#'
#' @examples
#' rep(NA, 4) |> all_na()
all_na <- function(data) {
all(is.na(data))
}
#' Guess time variables based on naming pattern
#'
#' @description
#' This is for repairing data with time variables with appended "1970-01-01"
#'
#'
#' @param data data.frame or tibble
#' @param validate.time Flag to validate guessed time columns
#' @param time.var.sel.pos Positive selection regex string passed to
#' `guess_time_only_filter()` as sel.pos.
#' @param time.var.sel.neg Negative selection regex string passed to
#' `guess_time_only_filter()` as sel.neg.
#'
#' @return data.frame or tibble
#' @export
#'
#' @examples
#' redcapcast_data |> guess_time_only(validate.time = TRUE)
guess_time_only <- function(data,
validate.time = FALSE,
time.var.sel.pos = "[Tt]i[d(me)]",
time.var.sel.neg = "[Dd]at[eo]") {
if (validate.time) {
return(data |> guess_time_only_filter(validate = TRUE))
}
### Only keeps the first class, as time fields (POSIXct/POSIXt) has two
### classes
data |> time_only_correction(
sel.pos = time.var.sel.pos,
sel.neg = time.var.sel.neg
)
}
### Completion
#' Completion marking based on completed upload
#'
@ -374,3 +529,186 @@ mark_complete <- function(upload, ls) {
) |>
stats::setNames(c(names(data)[1], paste0(forms, "_complete")))
}
#' Helper to auto-parse un-formatted data with haven and readr
#'
#' @param data data.frame or tibble
#' @param guess_type logical to guess type with readr
#' @param col_types specify col_types using readr semantics. Ignored if guess_type is TRUE
#' @param locale option to specify locale. Defaults to readr::default_locale().
#' @param ignore.vars specify column names of columns to ignore when parsing
#' @param ... ignored
#'
#' @return data.frame or tibble
#' @export
#'
#' @examples
#' mtcars |>
#' parse_data() |>
#' str()
parse_data <- function(data,
guess_type = TRUE,
col_types = NULL,
locale = readr::default_locale(),
ignore.vars = "cpr",
...) {
if (any(ignore.vars %in% names(data))) {
ignored <- data[ignore.vars]
} else {
ignored <- NULL
}
## Parses haven data by applying labels as factors in case of any
if (do.call(c, lapply(data, (\(x)inherits(x, "haven_labelled")))) |> any()) {
data <- data |>
as_factor()
}
## Applying readr cols
if (is.null(col_types) && guess_type) {
if (do.call(c, lapply(data, is.character)) |> any()) {
data <- data |> readr::type_convert(
locale = locale,
col_types = readr::cols(.default = readr::col_guess())
)
}
} else {
data <- data |> readr::type_convert(
locale = locale,
col_types = readr::cols(col_types)
)
}
if (!is.null(ignored)) {
data[ignore.vars] <- ignored
}
data
}
#' Convert vector to factor based on threshold of number of unique levels
#'
#' @description
#' This is a wrapper of forcats::as_factor, which sorts numeric vectors before
#' factoring, but levels character vectors in order of appearance.
#'
#'
#' @param data vector or data.frame column
#' @param unique.n threshold to convert class to factor
#'
#' @return vector
#' @export
#' @importFrom forcats as_factor
#'
#' @examples
#' sample(seq_len(4), 20, TRUE) |>
#' var2fct(6) |>
#' summary()
#' sample(letters, 20) |>
#' var2fct(6) |>
#' summary()
#' sample(letters[1:4], 20, TRUE) |> var2fct(6)
var2fct <- function(data, unique.n) {
if (length(unique(data)) <= unique.n) {
as_factor(data)
} else {
data
}
}
#' Applying var2fct across data set
#'
#' @description
#' Individual thresholds for character and numeric columns
#'
#' @param data dataset. data.frame or tibble
#' @param numeric.threshold threshold for var2fct for numeric columns. Default
#' is 6.
#' @param character.threshold threshold for var2fct for character columns.
#' Default is 6.
#'
#' @return data.frame or tibble
#' @export
#'
#' @examples
#' mtcars |> str()
#' \dontrun{
#' mtcars |>
#' numchar2fct(numeric.threshold = 6) |>
#' str()
#' }
numchar2fct <- function(data, numeric.threshold = 6, character.threshold = 6) {
data |>
dplyr::mutate(
dplyr::across(
dplyr::where(is.numeric),
\(.x){
var2fct(data = .x, unique.n = numeric.threshold)
}
),
dplyr::across(
dplyr::where(is.character),
\(.x){
          var2fct(data = .x, unique.n = character.threshold)
}
)
)
}
#' Named vector to REDCap choices (wrapping `compact_vec()`)
#'
#' @param data named vector
#'
#' @return character string
#' @export
#'
#' @examples
#' sample(seq_len(4), 20, TRUE) |>
#' as_factor() |>
#' named_levels() |>
#' sort() |>
#' vec2choice()
vec2choice <- function(data) {
compact_vec(data, nm.sep = ", ", val.sep = " | ")
}
#' Compacting a vector of any length with or without names
#'
#' @param data vector, optionally named
#' @param nm.sep string separating name from value if any
#' @param val.sep string separating values
#'
#' @return character string
#' @export
#'
#' @examples
#' sample(seq_len(4), 20, TRUE) |>
#' as_factor() |>
#' named_levels() |>
#' sort() |>
#' compact_vec()
#' 1:6 |> compact_vec()
#' "test" |> compact_vec()
#' sample(letters[1:9], 20, TRUE) |> compact_vec()
compact_vec <- function(data, nm.sep = ": ", val.sep = "; ") {
if (all(is.na(data))) {
return(data)
}
if (length(names(data)) > 0) {
paste(
paste(data,
names(data),
sep = nm.sep
),
collapse = val.sep
)
} else {
paste(
data,
collapse = val.sep
)
}
}


@ -1,15 +1,22 @@
#' Retrieve project API key if stored, if not, set and retrieve
#'
#' @description
#' Attempting to make secure API key storage so simple, that no other way makes
#' sense. Wrapping \link[keyring]{key_get} and \link[keyring]{key_set} using the
#' \link[keyring]{key_list} to check if key is in storage already.
#'
#'
#' @param key.name character vector of key name
#' @param ... passed to \link[keyring]{key_set}
#'
#' @return character vector
#' @importFrom keyring key_list key_get key_set
#' @export
get_api_key <- function(key.name) {
get_api_key <- function(key.name, ...) {
if (key.name %in% keyring::key_list()$service) {
keyring::key_get(service = key.name)
} else {
keyring::key_set(service = key.name, prompt = "Provide REDCap API key:")
keyring::key_set(service = key.name, ...)
keyring::key_get(service = key.name)
}
}
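# Minimal usage sketch (not part of the package source; assumes a working
# keyring backend, and the key name is just an example):
# key <- get_api_key("MyProject_REDCAP_API", prompt = "Provide REDCap API key:")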
@ -18,25 +25,72 @@ get_api_key <- function(key.name) {
#' Secure API key storage and data acquisition in one
#'
#' @param project.name The name of the current project (for key storage with
#' `keyring::key_set()`, using the default keyring)
#' @param widen.data argument to widen the exported data
#' \link[keyring]{key_set}, using the default keyring)
#' @param widen.data argument to widen the exported data. [DEPRECATED], use
#' `data_format` instead.
#' @param uri REDCap database API uri
#' @param ... arguments passed on to `REDCapCAST::read_redcap_tables()`
#' @param raw_or_label argument passed on to
#' \link[REDCapCAST]{read_redcap_tables}. Default is "both" to get labelled
#' data.
#' @param data_format Choose the data format: "wide" (default), "list",
#' "redcap" or "long".
#' @param ... arguments passed on to \link[REDCapCAST]{read_redcap_tables}.
#'
#' @return data.frame or list depending on widen.data
#' @export
easy_redcap <- function(project.name, widen.data = TRUE, uri, ...) {
key <- get_api_key(key.name = paste0(project.name, "_REDCAP_API"))
#'
#' @examples
#' \dontrun{
#' easy_redcap("My_new_project", fields = c("record_id", "age", "hypertension"))
#' }
easy_redcap <- function(project.name,
uri,
raw_or_label = "both",
data_format = c("wide", "list", "redcap", "long"),
widen.data = NULL,
...) {
data_format <- match.arg(data_format)
out <- read_redcap_tables(
# Interpretation of "widen.data" is kept and will override "data_format"
# for legacy sake
if (isTRUE(widen.data)) {
data_format <- "wide"
}
if (data_format %in% c("wide", "list")) {
split_action <- "all"
} else {
split_action <- "none"
}
key <- get_api_key(
key.name = paste0(project.name, "_REDCAP_API"),
prompt = "Provide REDCap API key:"
)
redcap_data <- read_redcap_tables(
uri = uri,
token = key,
raw_or_label = raw_or_label,
split_forms = split_action,
...
)
if (widen.data) {
out <- out |> redcap_wider()
# For now, long data format is just legacy REDCap
# All options are written out for future improvements
if (data_format == "wide") {
out <- redcap_data |>
redcap_wider() |>
suffix2label()
} else if (data_format == "list") {
# The read_redcap_tables() output is a list of tables (forms)
out <- redcap_data
} else if (data_format == "long") {
out <- redcap_data
} else if (data_format == "redcap") {
out <- redcap_data
}
out
}
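# Minimal usage sketch (not part of the package source; project name, URI and
# field names are placeholders):
# easy_redcap(
#   project.name = "My_new_project",
#   uri = "https://redcap.your.institution/api/",
#   data_format = "wide",
#   fields = c("record_id", "age", "hypertension")
# )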


@ -0,0 +1,126 @@
#' Creates a zip file with the necessary content to manually add an instrument to a database
#'
#' @description
#' Metadata can be added by editing the data dictionary of a project in the
#' initial design phase. If you want to later add new instruments, this
#' function can be used to create (an) instrument(s) to add to a project in
#' production.
#'
#' @param data metadata for the relevant instrument.
#' Could be from `ds2dd_detailed()`
#' @param file destination file name.
#' @param force force instrument creation and ignore different form names by
#' just using the first.
#' @param record.id record id variable name. Default is 'record_id'.
#'
#' @return exports zip-file
#' @export
#'
#' @examples
#' # iris |>
#' # ds2dd_detailed(
#' # add.auto.id = TRUE,
#' # form.name = sample(c("b", "c"), size = 6, replace = TRUE, prob = rep(.5, 2))
#' # ) |>
#' # purrr::pluck("meta") |>
#' # (\(.x){
#' # split(.x, .x$form_name)
#' # })() |>
#' # purrr::imap(function(.x, .i){
#' # export_redcap_instrument(.x,file=here::here(paste0(.i,Sys.Date(),".zip")))
#' # })
#'
#' # iris |>
#' # ds2dd_detailed(
#' # add.auto.id = TRUE
#' # ) |>
#' # purrr::pluck("meta") |>
#' # export_redcap_instrument(file=here::here(paste0("instrument",Sys.Date(),".zip")))
export_redcap_instrument <- function(data,
file,
force = FALSE,
record.id = "record_id") {
# Ensure form name is the same
if (force) {
data$form_name <- data$form_name[1]
} else if (length(unique(data$form_name)) != 1) {
stop("Please provide metadata for a single form only. See examples for
ideas on exporting multiple instruments.")
}
if (!is.na(record.id) && record.id %in% data[["field_name"]]) {
data <- data[-match(record.id, data[["field_name"]]), ]
}
temp_dir <- tempdir()
utils::write.csv(data, paste0(temp_dir, "/instrument.csv"), row.names = FALSE, na = "")
writeLines("REDCapCAST", paste0(temp_dir, "/origin.txt"))
zip::zip(
zipfile = file,
files = c("origin.txt", "instrument.csv"),
root = temp_dir
)
}
#' DEPRECATED Create zip files with necessary content based on data set
#'
#' @description
#' Metadata can be added by editing the data dictionary of a project in the
#' initial design phase. If you want to later add new instruments, this
#' function can be used to create (an) instrument(s) to add to a project in
#' production.
#'
#' @param data metadata for the relevant instrument.
#' Could be from `ds2dd_detailed()`
#' @param dir destination dir for the instrument zip. Default is the current WD.
#' @param record.id flag to omit the first row of the data dictionary assuming
#' this is the record_id field which should not be included in the instrument.
#' Default is TRUE.
#'
#' @return list
#' @export
#'
#' @examples
#' \dontrun{
#' data <- iris |>
#' ds2dd_detailed(
#' add.auto.id = TRUE,
#' form.name = sample(c("b", "c"),
#' size = 6,
#' replace = TRUE, prob = rep(.5, 2)
#' )
#' ) |>
#' purrr::pluck("meta")
#' # data |> create_instrument_meta()
#'
#' data <- iris |>
#' ds2dd_detailed(add.auto.id = FALSE) |>
#' purrr::pluck("data")
#' iris |>
#' setNames(glue::glue("{sample(x = c('a','b'), size = ncol(iris),
#' replace=TRUE,prob = rep(x=.5,2))}__{names(iris)}")) |>
#' ds2dd_detailed(form.sep = "__")
#' data |>
#' purrr::pluck("meta") |>
#' create_instrument_meta(record.id = FALSE)
#' }
create_instrument_meta <- function(data,
dir = here::here(""),
record.id = TRUE) {
# browser()
if (record.id) {
data <- data[-1, ]
}
temp_dir <- tempdir()
split(data, data$form_name) |> purrr::imap(function(.x, .i) {
utils::write.csv(.x, paste0(temp_dir, "/instrument.csv"),
row.names = FALSE, na = ""
)
writeLines("REDCapCAST", paste0(temp_dir, "/origin.txt"))
zip::zip(paste0(dir, "/", .i, Sys.Date(), ".zip"),
files = c("origin.txt", "instrument.csv"),
root = temp_dir
)
})
}

R/fct_drop.R Normal file

@ -0,0 +1,45 @@
#' Drop unused levels preserving label data
#'
#' This extends [forcats::fct_drop()] to natively work across a data.frame and
#' replaces [base::droplevels()].
#'
#' @param x Factor to drop unused levels
#' @param ... Other arguments passed down to method.
#' @export
#'
#' @importFrom forcats fct_drop
#' @export
#' @name fct_drop
fct_drop <- function(x, ...) {
UseMethod("fct_drop")
}
#' @rdname fct_drop
#' @export
#'
#' @examples
#' mtcars |>
#' numchar2fct() |>
#' fct_drop()
fct_drop.data.frame <- function(x, ...) {
purrr::map(x, \(.x){
if (is.factor(.x)) {
forcats::fct_drop(.x)
} else {
.x
}
}) |>
dplyr::bind_cols()
}
#' @rdname fct_drop
#' @export
#'
#' @examples
#' mtcars |>
#' numchar2fct() |>
#' dplyr::mutate(vs = fct_drop(vs))
fct_drop.factor <- function(x, ...) {
forcats::fct_drop(f = x, ...)
}

R/html_styling.R Normal file

@ -0,0 +1,68 @@
#' Create two-column HTML table for data piping in REDCap instruments
#'
#' @param text descriptive text
#' @param variable variable to pipe
#'
#' @return character vector
#' @export
#'
#' @examples
#' create_html_table(text = "Patient ID", variable = c("[cpr]"))
#' create_html_table(text = paste("assessor", 1:2, sep = "_"), variable = c("[cpr]"))
#' # create_html_table(text = c("CPR nummer","Word"), variable = c("[cpr][1]", "[cpr][2]", "[test]"))
create_html_table <- function(text, variable) {
  assertthat::assert_that(
    length(text) > 1 & length(variable) == 1 |
      length(text) == 1 & length(variable) > 1 |
      length(text) == length(variable),
    msg = "text and variable have to have the same length, or one of them has to have length 1"
  )
start <- '<table style="border-collapse: collapse; width: 100%;" border="0"> <tbody>'
end <- "</tbody> </table>"
# Extension would allow defining number of columns and specify styling
items <- purrr::map2(text, variable, function(.x, .y) {
glue::glue('<tr> <td style="width: 58%;"> <h5><span style="font-weight: normal;">{.x}<br /></span></h5> </td> <td style="width: 42%; text-align: left;"> <h5><span style="font-weight: bold;">{.y}</span></h5> </td> </tr>')
})
glue::glue(start, glue::glue_collapse(purrr::list_c(items)), end)
}
#' Simple html tag wrapping for REDCap text formatting
#'
#' @param data character vector
#' @param tag character vector length 1
#' @param extra character vector
#'
#' @return character vector
#' @export
#'
#' @examples
#' html_tag_wrap("Titel", tag = "div", extra = 'class="rich-text-field-label"')
#' html_tag_wrap("Titel", tag = "h2")
html_tag_wrap <- function(data, tag = "h2", extra = NULL) {
et <- ifelse(is.null(extra), "", paste0(" ", extra))
glue::glue("<{tag}{et}>{data}</{tag}>")
}
#' Sub-header formatting wrapper
#'
#' @param data character vector
#' @param tag character vector length 1
#'
#' @return character vector
#' @export
#'
#' @examples
#' "Instrument header" |> format_subheader()
format_subheader <- function(data, tag = "h2") {
dplyr::if_else(is.na(data) | data == "",
NA,
data |>
html_tag_wrap(tag = tag) |>
html_tag_wrap(
tag = "div",
extra = 'class="rich-text-field-label"'
)
)
}


@ -1,7 +1,20 @@
#' User input processing
#'
#' @param x input
#'
#' @return processed input
#' @export
process_user_input <- function(x) {
UseMethod("process_user_input", x)
}
#' User input processing default
#'
#' @param x input
#' @param ... ignored
#'
#' @return processed input
#' @export
process_user_input.default <- function(x, ...) {
stop(
deparse(substitute(x)),
@ -12,10 +25,25 @@ process_user_input.default <- function(x, ...) {
)
}
#' User input processing data.frame
#'
#' @param x input
#' @param ... ignored
#'
#' @return processed input
#' @export
process_user_input.data.frame <- function(x, ...) {
x
}
#' User input processing character
#'
#' @param x input
#' @param ... ignored
#'
#' @return processed input
#' @export
process_user_input.character <- function(x, ...) {
if (!requireNamespace("jsonlite", quietly = TRUE)) {
stop(
@ -32,6 +60,14 @@ process_user_input.character <- function(x, ...) {
jsonlite::fromJSON(x)
}
#' User input processing response
#'
#' @param x input
#' @param ... ignored
#'
#' @return processed input
#' @export
process_user_input.response <- function(x, ...) {
process_user_input(rawToChar(x$content))
}


@ -1,19 +1,33 @@
#' Download REDCap data
#'
#' Implementation of REDCap_split with a focused data acquisition approach using
#' REDCapR::redcap_read and only downloading specified fields, forms and/or
#' events using the built-in focused_metadata including some clean-up.
#' @description
#' Implementation of \link[REDCapCAST]{REDCap_split} with a focused data
#' acquisition approach using \link[REDCapR]{redcap_read}, only downloading
#' specified fields, forms and/or events using the built-in focused_metadata,
#' including some clean-up.
#' Works with classical and longitudinal projects with or without repeating
#' instruments.
#' Will preserve metadata in the data.frames as labels.
#'
#' @param uri REDCap database API uri
#' @param token API token
#' @param records records to download
#' @param fields fields to download
#' @param events events to download
#' @param forms forms to download
#' @param raw_or_label raw or label tags
#' @param raw_or_label raw or label tags. Can be "raw", "label" or "both".
#'
#' * "raw": Standard \link[REDCapR]{redcap_read} method to get raw values.
#' * "label": Standard \link[REDCapR]{redcap_read} method to get label values.
#' * "both": Get raw values with REDCap labels applied as labels. Use
#' \link[REDCapCAST]{as_factor} to format factors with original labels and use
#' the `gtsummary` package functions like \link[gtsummary]{tbl_summary} to
#' easily get beautiful tables with original labels from REDCap. Use
#' \link[REDCapCAST]{fct_drop} to drop empty levels.
#'
#' @param split_forms Whether to split "repeating" or "all" forms, default is
#' all.
#' all. Give "none" to export native semi-long REDCap format
#' @param ... passed on to \link[REDCapR]{redcap_read}
#'
#' @return list of instruments
#' @importFrom REDCapR redcap_metadata_read redcap_read redcap_event_instruments
@ -28,18 +42,24 @@ read_redcap_tables <- function(uri,
fields = NULL,
events = NULL,
forms = NULL,
raw_or_label = "label",
split_forms = "all") {
raw_or_label = c("raw", "label", "both"),
split_forms = c("all", "repeating", "none"),
...) {
raw_or_label <- match.arg(raw_or_label, c("raw", "label", "both"))
split_forms <- match.arg(split_forms)
# Getting metadata
m <-
REDCapR::redcap_metadata_read(redcap_uri = uri, token = token)[["data"]]
if (!is.null(fields)) {
fields_test <- fields %in% unique(m$field_name)
fields_test <- fields %in% c(m$field_name, paste0(unique(m$form_name), "_complete"))
if (any(!fields_test)) {
print(paste0("The following field names are invalid: ",
paste(fields[!fields_test], collapse = ", "), "."))
print(paste0(
"The following field names are invalid: ",
paste(fields[!fields_test], collapse = ", "), "."
))
stop("Not all supplied field names are valid")
}
}
@ -49,8 +69,10 @@ read_redcap_tables <- function(uri,
forms_test <- forms %in% unique(m$form_name)
if (any(!forms_test)) {
print(paste0("The following form names are invalid: ",
paste(forms[!forms_test], collapse = ", "), "."))
print(paste0(
"The following form names are invalid: ",
paste(forms[!forms_test], collapse = ", "), "."
))
stop("Not all supplied form names are valid")
}
}
@ -64,12 +86,20 @@ read_redcap_tables <- function(uri,
event_test <- events %in% unique(arm_event_inst$data$unique_event_name)
if (any(!event_test)) {
print(paste0("The following event names are invalid: ",
paste(events[!event_test], collapse = ", "), "."))
print(paste0(
"The following event names are invalid: ",
paste(events[!event_test], collapse = ", "), "."
))
stop("Not all supplied event names are valid")
}
}
if (raw_or_label == "both") {
rorl <- "raw"
} else {
rorl <- raw_or_label
}
# Getting dataset
d <- REDCapR::redcap_read(
redcap_uri = uri,
@ -78,9 +108,17 @@ read_redcap_tables <- function(uri,
events = events,
forms = forms,
records = records,
raw_or_label = raw_or_label
raw_or_label = rorl,
...
)[["data"]]
if (raw_or_label == "both") {
d <- apply_field_label(data = d, meta = m)
d <- apply_factor_labels(data = d, meta = m)
}
# Process repeat instrument naming
# Removes any extra characters other than a-z, 0-9 and "_", to mimic raw
# instrument names.
@ -91,13 +129,115 @@ read_redcap_tables <- function(uri,
# Processing metadata to reflect focused dataset
m <- focused_metadata(m, names(d))
# Splitting
out <- REDCap_split(d,
m,
forms = split_forms,
primary_table_name = ""
)
sanitize_split(out)
if (split_forms != "none") {
REDCap_split(d,
m,
forms = split_forms,
primary_table_name = ""
) |> sanitize_split()
} else {
d
}
}
#' Very simple function to remove rich text formatting from field label
#' and save the first paragraph ('<p>...</p>').
#'
#' @param data field label
#'
#' @return character vector
#' @export
#'
#' @examples
#' clean_field_label("<div class=\"rich-text-field-label\"><p>Fazekas score</p></div>")
clean_field_label <- function(data) {
out <- data |>
lapply(\(.x){
unlist(strsplit(.x, "</"))[1]
}) |>
lapply(\(.x){
splt <- unlist(strsplit(.x, ">"))
splt[length(splt)]
})
Reduce(c, out)
}
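# Illustrative note (not part of the package source): the example above would
# be expected to return "Fazekas score", i.e. the text of the first paragraph
# with the surrounding HTML tags stripped.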
#' Converts REDCap choices to factor levels and stores in labels attribute
#'
#' @description
#' Applying \link[REDCapCAST]{as_factor} to the data.frame or variable, will
#' coerce to a factor.
#'
#' @param data vector
#' @param meta vector of REDCap choices
#'
#' @return vector of class "labelled" with a "labels" attribute
#' @export
#'
#' @examples
#' format_redcap_factor(sample(1:3, 20, TRUE), "1, First. | 2, second | 3, THIRD")
format_redcap_factor <- function(data, meta) {
lvls <- strsplit(meta, " | ", fixed = TRUE) |>
unlist() |>
lapply(\(.x){
splt <- unlist(strsplit(.x, ", "))
stats::setNames(splt[1], nm = paste(splt[-1], collapse = ", "))
}) |>
(\(.x){
Reduce(c, .x)
})()
  set_attr(data, label = lvls, attr = "labels") |>
    set_attr(label = "labelled", attr = "class")
}
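# Illustrative note (not part of the package source): the example above would
# be expected to keep the raw values and add a "labels" attribute of
# c("First." = "1", "second" = "2", "THIRD" = "3"); applying as_factor() to the
# result should then yield a factor with those labels as levels.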
#' Apply REDCap field labels to data frame
#'
#' @param data REDCap exported data set
#' @param meta REDCap data dictionary
#'
#' @return data.frame
#' @export
#'
apply_field_label <- function(data, meta) {
purrr::imap(data, \(.x, .i){
if (.i %in% meta$field_name) {
# Does not handle checkboxes
out <- set_attr(.x,
label = clean_field_label(meta$field_label[meta$field_name == .i]),
attr = "label"
)
out
} else {
.x
}
}) |> dplyr::bind_cols()
}
#' Preserve all factor levels from REDCap data dictionary in data export
#'
#' @param data REDCap exported data set
#' @param meta REDCap data dictionary
#'
#' @return data.frame
#' @export
#'
apply_factor_labels <- function(data, meta = NULL) {
if (is.list(data) && !is.data.frame(data)) {
meta <- data$meta
data <- data$data
} else if (is.null(meta)) {
stop("Please provide a data frame for meta")
}
purrr::imap(data, \(.x, .i){
if (any(c("radio", "dropdown") %in% meta$field_type[meta$field_name == .i]) || is.factor(.x)) {
format_redcap_factor(.x, meta$select_choices_or_calculations[meta$field_name == .i])
} else {
.x
}
}) |> dplyr::bind_cols()
}
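# Minimal usage sketch (not part of the package source; assumes `d` is a raw
# REDCap export and `m` the matching data dictionary, e.g. both read with
# REDCapR; read_redcap_tables() applies both steps when raw_or_label = "both"):
# d |> apply_field_label(meta = m) |> apply_factor_labels(meta = m)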


@ -4,14 +4,20 @@ utils::globalVariables(c(
"inst.glue"
))
#' @title Redcap Wider
#' @description Converts a list of REDCap data frames from long to wide format.
#' Handles longitudinal projects, but not yet repeated instruments.
#' @param data A list of data frames.
#' @param event.glue A dplyr::glue string for repeated events naming
#' @param inst.glue A dplyr::glue string for repeated instruments naming
#' @return The list of data frames in wide format.
#' Transforms list of REDCap data.frames to a single wide data.frame
#'
#' @description Converts a list of REDCap data.frames from long to wide format.
#' In essence it is a wrapper for the \link[tidyr]{pivot_wider} function applied
#' on a REDCap output (from \link[REDCapCAST]{read_redcap_tables}) or manually
#' split by \link[REDCapCAST]{REDCap_split}.
#'
#' @param data A list of data frames
#' @param event.glue A \link[glue]{glue} string for repeated events naming
#' @param inst.glue A \link[glue]{glue} string for repeated instruments naming
#'
#' @return data.frame in wide format
#' @export
#'
#' @importFrom tidyr pivot_wider
#' @importFrom tidyselect all_of
#' @importFrom purrr reduce
@ -73,10 +79,35 @@ utils::globalVariables(c(
#' )
#' )
#' redcap_wider(list4)
#'
#' list5 <- list(
#' data.frame(
#' record_id = c(1, 2, 1, 2),
#' redcap_event_name = c("baseline", "baseline", "followup", "followup")
#' ),
#' data.frame(
#' record_id = c(1, 1, 1, 1, 2, 2, 2, 2),
#' redcap_event_name = c(
#' "baseline", "baseline", "followup", "followup",
#' "baseline", "baseline", "followup", "followup"
#' ),
#' redcap_repeat_instrument = "walk",
#' redcap_repeat_instance = c(1, 2, 1, 2, 1, 2, 1, 2),
#' dist = c(40, 32, 25, 33, 28, 24, 23, 36)
#' ),
#' data.frame(
#' record_id = c(1, 2),
#' redcap_event_name = c("baseline", "baseline"),
#' gender = c("male", "female")
#' )
#' )
#' redcap_wider(list5)
redcap_wider <-
function(data,
event.glue = "{.value}_{redcap_event_name}",
inst.glue = "{.value}_{redcap_repeat_instance}") {
event.glue = "{.value}____{redcap_event_name}",
inst.glue = "{.value}____{redcap_repeat_instance}") {
if (!is_repeated_longitudinal(data)) {
if (is.list(data)) {
if (length(data) == 1) {
@ -88,7 +119,28 @@ redcap_wider <-
out <- data
}
} else {
id.name <- do.call(c, lapply(data, names))[[1]]
## Cleaning instrument list to only include instruments holding other data
## than ID and generic columns
## This is to mitigate an issue when not exporting fields from the first
## instrument.
## Not taking this step would throw an error when pivoting.
instrument_names <- lapply(data, names)
id.name <- do.call(c, instrument_names)[[1]]
generic_names <- c(
id.name,
"redcap_event_name",
"redcap_repeat_instrument",
"redcap_repeat_instance"
)
semi_empty <- lapply(instrument_names,\(.x){
all(.x %in% generic_names)
}) |> unlist()
data <- data[!semi_empty]
l <- lapply(data, function(i) {
rep_inst <- "redcap_repeat_instrument" %in% names(i)
@ -97,12 +149,7 @@ redcap_wider <-
k <- lapply(split(i, f = i[[id.name]]), function(j) {
cname <- colnames(j)
vals <-
cname[!cname %in% c(
id.name,
"redcap_event_name",
"redcap_repeat_instrument",
"redcap_repeat_instance"
)]
cname[!cname %in% generic_names]
s <- tidyr::pivot_wider(
j,
names_from = "redcap_repeat_instance",
@ -111,7 +158,15 @@ redcap_wider <-
)
s[!colnames(s) %in% c("redcap_repeat_instrument")]
})
i <- Reduce(dplyr::bind_rows, k)
# Labels are removed and restored after bind_rows as class "labelled"
# is not supported
i <- remove_labelled(k) |>
dplyr::bind_rows()
all_labels <- save_labels(data)
i <- restore_labels(i, all_labels)
}
event <- "redcap_event_name" %in% names(i)
@ -141,8 +196,82 @@ redcap_wider <-
}
})
out <- data.frame(Reduce(f = dplyr::full_join, x = l))
# out <- Reduce(f = dplyr::full_join, x = l)
out <- purrr::reduce(.x = l, .f = dplyr::full_join)
}
out
}
# Applies list of attributes to data.frame
restore_labels <- function(data, labels) {
stopifnot(is.list(labels))
stopifnot(is.data.frame(data))
for (ndx in names(labels)) {
data <- purrr::imap(data, \(.y, .j){
if (startsWith(.j, ndx)) {
set_attr(.y, labels[[ndx]])
} else {
.y
}
}) |> dplyr::bind_cols()
}
return(data)
}
# Extract unique variable attributes from list of data.frames
save_labels <- function(data) {
stopifnot(is.list(data))
out <- list()
for (j in seq_along(data)) {
out <- c(out, lapply(data[[j]], get_attr))
}
out[!duplicated(names(out))]
}
# Removes class attributes of class "labelled" or "haven_labelled"
remove_labelled <- function(data) {
stopifnot(is.list(data))
lapply(data, \(.x) {
lapply(.x, \(.y) {
if (REDCapCAST::is.labelled(.y)) {
set_attr(.y, label = NULL, attr = "class")
} else {
.y
}
}) |>
dplyr::bind_cols()
})
}
#' Transfer variable name suffix to label in widened data
#'
#' @param data data.frame
#' @param suffix.sep string to split suffix(es). Passed to \link[base]{strsplit}
#' @param attr label attribute. Default is "label"
#' @param glue.str glue string for new label. Available variables are "label"
#' and "suffixes"
#'
#' @return data.frame
#' @export
#'
suffix2label <- function(data,
suffix.sep = "____",
attr = "label",
glue.str="{label} ({paste(suffixes,collapse=', ')})") {
data |>
purrr::imap(\(.d, .i){
suffixes <- unlist(strsplit(.i, suffix.sep))[-1]
if (length(suffixes) > 0) {
label <- get_attr(.d, attr = attr)
set_attr(.d,
glue::glue(glue.str),
attr = attr
)
} else {
.d
}
}) |>
dplyr::bind_cols()
}
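# Illustrative note (not part of the package source): assuming a widened column
# named "mrs____followup" carrying a label attribute of "mRS", suffix2label()
# would be expected to relabel it as "mRS (followup)".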


@ -17,6 +17,9 @@
#' \item{age_integer}{Age integer, numeric}
#' \item{sex}{Legal sex, character}
#' \item{cohabitation}{Cohabitation status, character}
#' \item{con_calc}{con_calc}
#' \item{con_mrs}{con_mrs}
#' \item{consensus_complete}{consensus_complete}
#' \item{hypertension}{Hypertension, character}
#' \item{diabetes}{diabetes, character}
#' \item{region}{region, character}


@ -1,6 +1,6 @@
#' REDCap metadata from data base
#'
#' This metadata dataset from a REDCap database is for demonstrational purposes.
#' This metadata dataset from a REDCap database is for demonstration purposes.
#'
#' @format A data frame with 22 variables:
#' \describe{


@ -1,58 +1,298 @@
utils::globalVariables(c("server"))
#' Shiny server factory
#'
#' @return shiny server
#' @export
server_factory <- function() {
source(here::here("app/server.R"))
server
}
#' UI factory for shiny app
#'
#' @return shiny ui
#' @export
ui_factory <- function() {
# require(ggplot2)
source(here::here("app/ui.R"))
}
#' Launch the included Shiny-app for database casting and upload
#'
#' @description
#' Wraps shiny::runApp()
#'
#' @param ... Arguments passed to shiny::runApp()
#'
#' @return shiny app
#' @export
#'
#' @examples
#' # shiny_cast()
#'
shiny_cast <- function() {
# shiny::runApp(appDir = here::here("app/"), launch.browser = TRUE)
shiny_cast <- function(...) {
appDir <- system.file("shiny-examples", "casting", package = "REDCapCAST")
if (appDir == "") {
stop("Could not find example directory. Try re-installing `REDCapCAST`.", call. = FALSE)
}
shiny::shinyApp(
ui_factory(),
server_factory()
)
shiny::runApp(appDir = appDir, ...)
}
#' Deploy the Shiny app with rsconnect
#' DEPRECATED Helper to import files correctly
#'
#' @param path app folder path
#' @param name.app name of deployed app
#' @param filenames file names
#'
#' @return deploy
#' @return character vector
#' @export
#'
#' @examples
#' # deploy_shiny
#' file_extension(list.files(here::here(""))[[2]])[[1]]
#' file_extension(c("file.cd..ks", "file"))
file_extension <- function(filenames) {
sub(
pattern = "^(.*\\.|[^.]+)(?=[^.]*)", replacement = "",
filenames,
perl = TRUE
)
}
#' Flexible file import based on extension
#'
deploy_shiny <- function(path = here::here("app/"), name.app = "shiny_cast") {
# Connecting
rsconnect::setAccountInfo(
name = "cognitiveindex",
token = keyring::key_get(service = "rsconnect_cognitiveindex_token"),
secret = keyring::key_get(service = "rsconnect_cognitiveindex_secret")
#' @param file file name
#' @param consider.na character vector of strings to consider as NAs
#'
#' @return tibble
#' @export
#'
#' @importFrom openxlsx2 read_xlsx
#' @importFrom haven read_dta
#' @importFrom readODS read_ods
#' @importFrom readr read_csv read_rds
#'
#'
#' @examples
#' read_input("https://raw.githubusercontent.com/agdamsbo/cognitive.index.lookup/main/data/sample.csv")
read_input <- function(file, consider.na = c("NA", '""', "")) {
ext <- tolower(tools::file_ext(file))
tryCatch(
{
if (ext == "csv") {
df <- read_csv(file = file, na = consider.na)
} else if (ext %in% c("xls", "xlsx")) {
df <- read_xlsx(file = file, na.strings = consider.na)
} else if (ext == "dta") {
df <- read_dta(file = file)
} else if (ext == "ods") {
df <- read_ods(path = file)
} else if (ext == "rds") {
df <- read_rds(file = file)
      } else {
        stop("Input file format has to be one of:
             '.csv', '.xls', '.xlsx', '.dta', '.ods' or '.rds'")
}
},
error = function(e) {
# return a safeError if a parsing error occurs
stop(shiny::safeError(e))
}
)
# Deploying
rsconnect::deployApp(appDir = path, lint = TRUE, appName = name.app, )
df
}
#' Overview of REDCapCAST data for shiny
#'
#' @param data list with class 'REDCapCAST'
#'
#' @return gt object
#' @export
cast_data_overview <- function(data){
stopifnot("REDCapCAST" %in% class(data))
data |>
purrr::pluck("data") |>
utils::head(20) |>
# dplyr::tibble() |>
gt::gt() |>
gt::tab_style(
style = gt::cell_text(weight = "bold"),
locations = gt::cells_column_labels(dplyr::everything())
) |>
gt::tab_header(
title = "Imported data preview",
subtitle = "The first 20 subjects of the supplied dataset for reference."
)
}
#' Overview of REDCapCAST meta data for shiny
#'
#' @param data list with class 'REDCapCAST'
#'
#' @return gt object
#' @export
cast_meta_overview <- function(data){
stopifnot("REDCapCAST" %in% class(data))
data |>
purrr::pluck("meta") |>
# dplyr::tibble() |>
dplyr::mutate(
dplyr::across(
dplyr::everything(),
\(.x) {
.x[is.na(.x)] <- ""
return(.x)
}
)
) |>
dplyr::select(1:8) |>
gt::gt() |>
gt::tab_style(
style = gt::cell_text(weight = "bold"),
locations = gt::cells_column_labels(dplyr::everything())
) |>
gt::tab_header(
title = "Generated metadata",
subtitle = "Only the first 8 columns are modified using REDCapCAST. Download the metadata to see everything."
) |>
gt::tab_style(
style = gt::cell_borders(
sides = c("left", "right"),
color = "grey80",
weight = gt::px(1)
),
locations = gt::cells_body(
columns = dplyr::everything()
)
)
}
#' Navigation bar page defining function for the shiny UI
#'
#' @return shiny object
#' @export
#'
nav_bar_page <- function(){
bslib::page_navbar(
title = "Easy REDCap database creation",
sidebar = bslib::sidebar(
width = 300,
shiny::h5("Metadata casting"),
shiny::fileInput(
inputId = "ds",
label = "Upload spreadsheet",
multiple = FALSE,
accept = c(
".csv",
".xls",
".xlsx",
".dta",
".rds",
".ods"
)
),
# shiny::actionButton(
# inputId = "load_data",
# label = "Load data",
# icon = shiny::icon("circle-down")
# ),
shiny::helpText("Have a look at the preview panels to validate the data dictionary and imported data."),
# For some odd reason this only unfolds when the preview panel is shown..
# This has been solved by adding an arbitrary button to load data - which was abandoned again
shiny::conditionalPanel(
condition = "output.uploaded=='yes'",
shiny::radioButtons(
inputId = "add_id",
label = "Add ID, or use first column?",
selected = "no",
inline = TRUE,
choices = list(
"First column" = "no",
"Add ID" = "yes",
"No ID" = "none"
)
),
shiny::radioButtons(
inputId = "specify_factors",
label = "Specify categorical variables?",
selected = "no",
inline = TRUE,
choices = list(
"No" = "no",
"Yes" = "yes"
)
),
shiny::conditionalPanel(
condition = "input.specify_factors=='yes'",
shiny::uiOutput("factor_vars")
),
# condition = "input.load_data",
# shiny::helpText("Below you can download the dataset formatted for upload and the
# corresponding data dictionary for a new data base, if you want to upload manually."),
# Button
shiny::downloadButton(outputId = "downloadData", label = "Download renamed data"),
# Button
shiny::downloadButton(outputId = "downloadMeta", label = "Download data dictionary"),
# Button
shiny::downloadButton(outputId = "downloadInstrument", label = "Download as instrument"),
# Horizontal line ----
shiny::tags$hr(),
shiny::radioButtons(
inputId = "upload_redcap",
label = "Upload directly to REDCap server?",
selected = "no",
inline = TRUE,
choices = list(
"No" = "no",
"Yes" = "yes"
)
),
shiny::conditionalPanel(
condition = "input.upload_redcap=='yes'",
shiny::h4("2) Data base upload"),
shiny::helpText("This tool is usable for now. Detailed instructions are coming."),
shiny::textInput(
inputId = "uri",
label = "URI",
value = "https://redcap.your.institution/api/"
),
shiny::textInput(
inputId = "api",
label = "API key",
value = ""
),
shiny::helpText("An API key is an access key to the REDCap database. Please", shiny::a("see here for directions", href = "https://www.iths.org/news/redcap-tip/redcap-api-101/"), " to obtain an API key for your project."),
shiny::actionButton(
inputId = "upload.meta",
label = "Upload datadictionary", icon = shiny::icon("book-bookmark")
),
shiny::helpText("Please note, that before uploading any real data, put your project
into production mode."),
shiny::actionButton(
inputId = "upload.data",
label = "Upload data", icon = shiny::icon("upload")
)
)
),
shiny::br(),
shiny::br(),
shiny::br(),
shiny::p(
"License: ", shiny::a("GPL-3+", href = "https://agdamsbo.github.io/REDCapCAST/LICENSE.html")
),
shiny::p(
shiny::a("Package documentation", href = "https://agdamsbo.github.io/REDCapCAST")
)
),
bslib::nav_panel(
title = "Intro",
shiny::markdown(readLines("www/SHINYCAST.md")),
shiny::br()
),
# bslib::nav_spacer(),
bslib::nav_panel(
title = "Data preview",
gt::gt_output(outputId = "data.tbl")
# shiny::htmlOutput(outputId = "data.tbl", container = shiny::span)
),
bslib::nav_panel(
title = "Dictionary overview",
gt::gt_output(outputId = "meta.tbl")
# shiny::htmlOutput(outputId = "meta.tbl", container = shiny::span)
),
bslib::nav_panel(
title = "Upload",
shiny::h3("Meta upload overview"),
shiny::textOutput(outputId = "upload.meta.print"),
shiny::h3("Data upload overview"),
shiny::textOutput(outputId = "upload.data.print")
)
)
}

Binary file not shown.

R/utils.r

@ -97,7 +97,10 @@ focused_metadata <- function(metadata, vars_in_data) {
#' @return vector or data frame, same format as input
#' @export
#'
#' @examples
#' "Research!, ne:ws? and c;l-.ls" |> clean_redcap_name()
clean_redcap_name <- function(x) {
gsub("[,.;:?!@]","",
gsub(
" ", "_",
gsub(
@ -108,14 +111,19 @@ clean_redcap_name <- function(x) {
)
)
)
)
}
#' Sanitize list of data frames
#'
#' Removing empty rows
#'
#' @param l A list of data frames.
#' @param generic.names A vector of generic names to be excluded.
#' @param drop.complete logical to remove generic REDCap variables indicating
#' instrument completion. Default is TRUE.
#' @param drop.empty logical to remove variables with only NAs. Default is TRUE.
#'
#' @return A list of data frames with generic names excluded.
#'
@ -127,21 +135,34 @@ sanitize_split <- function(l,
"redcap_event_name",
"redcap_repeat_instrument",
"redcap_repeat_instance"
)) {
),
                           drop.complete = TRUE,
                           drop.empty = TRUE) {
generic.names <- c(
get_id_name(l),
generic.names,
paste0(names(l), "_complete")
generic.names
)
lapply(l, function(i) {
if (drop.complete){
generic.names <- c(
generic.names,
paste0(names(l), "_complete")
)
}
out <- lapply(l, function(i) {
if (ncol(i) > 2) {
s <- data.frame(i[, !colnames(i) %in% generic.names])
s <- i[!colnames(i) %in% generic.names]
if (drop.empty){
i[!apply(is.na(s), MARGIN = 1, FUN = all), ]
}
} else {
i
}
})
  # After removing empty rows, a data.frame in the list may end up empty
  out[sapply(out, nrow) > 0]
}
@ -496,52 +517,27 @@ is_repeated_longitudinal <- function(data, generics = c(
}
#' Helper to import files correctly
#'
#' @param filenames file names
#'
#' @return character vector
#' @export
#'
#' @examples
#' file_extension(list.files(here::here(""))[[2]])[[1]]
file_extension <- function(filenames) {
sub(pattern = "^(.*\\.|[^.]+)(?=[^.]*)", replacement = "",
filenames,
perl = TRUE)
}
#' Flexible file import based on extension
#'
#' @param file file name
#' @param consider.na character vector of strings to consider as NAs
#'
#' @return tibble
#' @export
#'
#' @examples
#' read_input("https://raw.githubusercontent.com/agdamsbo/cognitive.index.lookup/main/data/sample.csv")
read_input <- function(file, consider.na = c("NA", '""', "")) {
ext <- file_extension(file)
tryCatch(
{
if (ext == "csv") {
df <- readr::read_csv(file = file, na = consider.na)
} else if (ext %in% c("xls", "xlsx")) {
df <- openxlsx2::read_xlsx(file = file, na.strings = consider.na)
} else if (ext == "dta") {
df <- haven::read_dta(file = file)
} else {
stop("Input file format has to be either '.csv', '.xls' or '.xlsx'")
}
},
error = function(e) {
# return a safeError if a parsing error occurs
stop(shiny::safeError(e))
}
dummy_fun <- function(...){
list(
gtsummary::add_difference()
)
df
}
#' Cut string to desired length
#'
#' @param data data
#' @param l length
#'
#' @returns character string of length l
#' @export
#'
#' @examples
#' "length" |> cut_string_length(l=3)
cut_string_length <- function(data, l = 100) {
  if (nchar(data) >= l) {
    substr(data, 1, l)
} else {
data
}
}


@ -1,57 +1,55 @@
<!-- badges: start -->
[![GitHub R package version](https://img.shields.io/github/r-package/v/agdamsbo/REDCapCAST)](https://github.com/agdamsbo/REDCapCAST)
[![CRAN/METACRAN](https://img.shields.io/cran/v/REDCapCAST)](https://CRAN.R-project.org/package=REDCapCAST)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8013984.svg)](https://doi.org/10.5281/zenodo.8013984)
[![R-CMD-check](https://github.com/agdamsbo/REDCapCAST/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/agdamsbo/REDCapCAST/actions/workflows/R-CMD-check.yaml)
[![Page deployed](https://github.com/agdamsbo/REDCapCAST/actions/workflows/pages/pages-build-deployment/badge.svg)](https://github.com/agdamsbo/REDCapCAST/actions/workflows/pages/pages-build-deployment)
[![Codecov test coverage](https://codecov.io/gh/agdamsbo/REDCapCAST/branch/master/graph/badge.svg)](https://app.codecov.io/gh/agdamsbo/REDCapCAST?branch=master)
[![CRAN downloads](https://cranlogs.r-pkg.org/badges/grand-total/REDCapCAST)](https://cran.r-project.org/package=REDCapCAST)
[![Lifecycle:
experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html)
[![GitHub R package version](https://img.shields.io/github/r-package/v/agdamsbo/REDCapCAST)](https://github.com/agdamsbo/REDCapCAST) [![CRAN/METACRAN](https://img.shields.io/cran/v/REDCapCAST)](https://CRAN.R-project.org/package=REDCapCAST) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.8013984.svg)](https://doi.org/10.5281/zenodo.8013984) [![R-hub](https://github.com/agdamsbo/REDCapCAST/actions/workflows/rhub.yaml/badge.svg)](https://github.com/agdamsbo/REDCapCAST/actions/workflows/rhub.yaml) [![R-CMD-check](https://github.com/agdamsbo/REDCapCAST/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/agdamsbo/REDCapCAST/actions/workflows/R-CMD-check.yaml) [![Page deployed](https://github.com/agdamsbo/REDCapCAST/actions/workflows/pages/pages-build-deployment/badge.svg)](https://github.com/agdamsbo/REDCapCAST/actions/workflows/pages/pages-build-deployment) [![CRAN downloads](https://cranlogs.r-pkg.org/badges/grand-total/REDCapCAST)](https://cran.r-project.org/package=REDCapCAST) [![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html) [![Codecov test coverage](https://codecov.io/gh/agdamsbo/REDCapCAST/graph/badge.svg)](https://app.codecov.io/gh/agdamsbo/REDCapCAST)
<!-- badges: end -->
# REDCapCAST package <img src="man/figures/logo.png" align="right" />
# REDCapCAST package <img src="man/figures/logo.png" align="right"/>
REDCap database casting and handling of castellated data when using repeated instruments and longitudinal projects.
Casting metadata for REDCap database creation and handling of castellated data using repeated instruments and longitudinal projects in REDCap.
This package is a fork of [pegeler/REDCapRITS](https://github.com/pegeler/REDCapRITS). The REDCapRITS represents great and extensive work to handle castellated REDCap data in different programming languages. This fork is purely minded on R usage and includes a few implementations of the main `REDCap_split` function.
This is implemented with
This package is very much to be seen as an attempt at an R-to-REDCap-to-R foundry, handling both the transition from dataset/variable list to database and the other way, from REDCap database to a tidy dataset. The goal was also to allow for a "minimal data" approach by filtering records, instruments and variables in the export, so that only the data needed is downloaded. I think this approach is desirable for handling sensitive, clinical data. Please refer to [REDCap-Tools](https://redcap-tools.github.io/) for other great tools for working with REDCap in R.
- An app interface for easy database creation, [accessible here](https://agdamsbo.shinyapps.io/redcapcast/) or available to run locally with `shiny_cast()`, allowing you to easily create a REDCap database based on an existing spreadsheet.
## Use and improvements
- Export data from REDCap in different formats, handling castellated data, and by default only export the requested data; this is mainly done through `read_redcap_tables()`.
Here is just a short description of the main functions:
REDCapCAST was initially built on, and still includes code from, [pegeler/REDCapRITS](https://github.com/pegeler/REDCapRITS), and relies on functions from the [`REDCapR`](https://ouhscbbmc.github.io/REDCapR/) project.
* `REDCap_split()`: Works largely as the original `REDCapRITS::REDCap_split()`. It takes a REDCap dataset and metadata (data dictionary) and splits the data set into a list of data frames, one per instrument.
## History
* `read_redcap_tables()`: wraps the use of [`REDCapR::redcap_read()`](https://github.com/OuhscBbmc/REDCapR) with `REDCap_split()` to ease the export of REDCap data. Default output is a list of data frames with one data frame for each REDCap instrument.
This package was originally forked from [pegeler/REDCapRITS](https://github.com/pegeler/REDCapRITS). `REDCapRITS` represents great and extensive work to handle castellated REDCap data in different programming languages. REDCapCAST has evolved into much more than just handling castellated data and so has been detached from the original project while still relying on the main `REDCap_split` function. All access to the REDCap database is built on the outstanding work in [`REDCapR`](https://ouhscbbmc.github.io/REDCapR/).
* `redcap_wider()`: joins and pivots a list of data frames with repeated instruments to a wide format utilizing the [`tidyr::pivot_wider()`](https://tidyr.tidyverse.org/reference/pivot_wider.html) from the [tidyverse](https://www.tidyverse.org/).
This package really started out of frustration during my PhD in health science, hearing colleagues complain about the "castellated" data formatting of REDCap exports in longitudinal projects while being used to wide data. This led to some bad decisions when building databases to avoid repeated instruments. This package solves these challenges, but solutions are also implemented elsewhere, such as in the [redcapAPI](https://github.com/vubiostat/redcapAPI) or [REDCapTidieR](https://github.com/CHOP-CGTInformatics/REDCapTidieR) packages, which are bigger projects.
* `easy_redcap()`: combines secure API key storage with the `keyring`-package, focused data retrieval and optional widening. This is the recommended approach for easy data access and analysis.
To help new PhD students and other researchers, I have also worked on creating a few helper/wrapper functions to ease data access. Documentation is on its way.
* `ds2dd_detailed()`: Converts a data set to a data dictionary for upload to a new REDCap database. Variables (fields) and instruments in a REDCap database are defined by this data dictionary.
* `shiny_cast()`: [Shiny](https://www.rstudio.com/products/shiny/) application to ease the process of converting a spreadsheet/data set to a REDCap database. The app runs locally and data is transferred securely. You can just create and upload the data dictionary, but you can also transfer the given data in the same process.
For any more advanced uses, consider using the [`redcapAPI`](https://github.com/vubiostat/redcapAPI) or [`REDCapR`](https://ouhscbbmc.github.io/REDCapR/) packages.
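A rough sketch of the recommended export workflow with `easy_redcap()` (the project name, URI and field names below are placeholders for your own project):
```
library(REDCapCAST)
# The API key is requested once and stored with keyring; later calls reuse it.
dat <- easy_redcap(
  project.name = "my_project",                     # placeholder name
  uri = "https://redcap.your.institution/api/",    # placeholder URI
  fields = c("record_id", "age", "hypertension"),  # placeholder fields
  data_format = "wide"
)
```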
## Future
The plan with this package is to bundle it with a handbook on working with REDCap from R. This work is in progress but is limited by the time available. Please feel free to contact me or create an issue with ideas for future additions.
## Installation
## Installation and use
The package is available on CRAN. Install the latest version:
```
install.packages("REDCapCAST")
```
Install the latest version directly from GitHub:
```
require("remotes")
remotes::install_github("agdamsbo/REDCapCAST")
```
Launch the REDCapCAST app interface directly on your own machine:
```
REDCapCAST::shiny_cast()
```
## Code of Conduct
Please note that the REDCapCAST project is released with a [Contributor Code of Conduct](https://agdamsbo.github.io/REDCapCAST/CODE_OF_CONDUCT.html). By contributing to this project, you agree to abide by its terms.

View file

@ -1,4 +1,5 @@
Version: 1.0
ProjectId: d97cf790-0785-4be6-9651-e02a4867726b
RestoreWorkspace: No
SaveWorkspace: No
@ -18,4 +19,5 @@ StripTrailingWhitespace: Yes
BuildType: Package
PackageUseDevtools: Yes
PackageInstallArgs: --no-multiarch --with-keep.source
PackageCheckArgs: --as-cran
PackageRoxygenize: rd,collate,namespace,vignette

View file

@ -1,81 +0,0 @@
server <- function(input, output, session) {
require(REDCapCAST)
dat <- shiny::reactive({
shiny::req(input$ds)
read_input(input$ds$datapath)
})
dd <- shiny::reactive({
ds2dd_detailed(data = dat())
})
output$data.tbl <- shiny::renderTable({
dd() |>
purrr::pluck("data") |>
head(20) |>
dplyr::tibble()
})
output$meta.tbl <- shiny::renderTable({
dd() |>
purrr::pluck("meta") |>
dplyr::tibble()
})
# Downloadable csv of dataset ----
output$downloadData <- shiny::downloadHandler(
filename = "data_ready.csv",
content = function(file) {
write.csv(purrr::pluck(dd(), "data"), file, row.names = FALSE)
}
)
# Downloadable csv of data dictionary ----
output$downloadMeta <- shiny::downloadHandler(
filename = "dictionary_ready.csv",
content = function(file) {
write.csv(purrr::pluck(dd(), "meta"), file, row.names = FALSE)
}
)
output_staging <- shiny::reactiveValues()
output_staging$meta <- output_staging$data <- NA
shiny::observeEvent(input$upload.meta, { upload_meta() })
shiny::observeEvent(input$upload.data, { upload_data() })
upload_meta <- function(){
shiny::req(input$uri)
shiny::req(input$api)
output_staging$meta <- REDCapR::redcap_metadata_write(
ds = purrr::pluck(dd(), "meta"),
redcap_uri = input$uri,
token = input$api
) |> purrr::pluck("success")
}
upload_data <- function(){
shiny::req(input$uri)
shiny::req(input$api)
output_staging$data <- REDCapR::redcap_write(
ds = purrr::pluck(dd(), "data"),
redcap_uri = input$uri,
token = input$api
) |> purrr::pluck("success")
}
output$upload.meta.print <- renderText(output_staging$meta)
output$upload.data.print <- renderText(output_staging$data)
}

View file

@ -1,89 +0,0 @@
ui <- shiny::fluidPage(
## -----------------------------------------------------------------------------
## Application title
## -----------------------------------------------------------------------------
shiny::titlePanel("Simple REDCap data base creation and data upload from data set file via API",
windowTitle = "REDCap databse creator"
),
shiny::h5("Please note, that this tool serves as a demonstration of some of the functionality
of the REDCapCAST package. No responsibility for data loss or any other
problems will be taken."),
## -----------------------------------------------------------------------------
## Side panel
## -----------------------------------------------------------------------------
shiny::sidebarPanel(
shiny::h4("REDCap database and dataset"),
shiny::fileInput("ds", "Choose data file",
multiple = FALSE,
accept = c(
".csv",
".xls",
".xlsx",
".dta"
)
),
shiny::h6("Below you can download the dataset formatted for upload and the
corresponding data dictionary for a new database."),
# Button
shiny::downloadButton("downloadData", "Download data"),
# Button
shiny::downloadButton("downloadMeta", "Download dictionary"),
# Horizontal line ----
shiny::tags$hr(),
shiny::h4("REDCap upload"),
shiny::textInput(
inputId = "uri",
label = "URI",
value = "https://redcap.your.institution/api/"
),
shiny::textInput(
inputId = "api",
label = "API key",
value = ""
),
shiny::actionButton(
inputId = "upload.meta",
label = "Upload dictionary", icon = shiny::icon("book-bookmark")
),
shiny::h6("Please note, that before uploading any real data, put your project
into production mode."),
shiny::actionButton(
inputId = "upload.data",
label = "Upload data", icon = shiny::icon("upload")
),
# Horizontal line ----
shiny::tags$hr()
),
shiny::mainPanel(
shiny::tabsetPanel(
## -----------------------------------------------------------------------------
## Summary tab
## -----------------------------------------------------------------------------
shiny::tabPanel(
"Summary",
shiny::h3("Data overview (first 20)"),
shiny::htmlOutput("data.tbl", container = shiny::span),
shiny::h3("Dictionary overview"),
shiny::htmlOutput("meta.tbl", container = shiny::span)
),
## -----------------------------------------------------------------------------
## Upload tab
## -----------------------------------------------------------------------------
shiny::tabPanel(
"Upload",
shiny::h3("Meta upload overview"),
shiny::htmlOutput("upload.meta.print", container = shiny::span),
shiny::h3("Data upload overview"),
shiny::htmlOutput("upload.data.print", container = shiny::span)
)
)
)
)

View file

@ -1,20 +1,10 @@
── R CMD check results ───────────────────────────────────────────────────────────────────────────────── REDCapCAST 25.3.2 ────
Duration: 37.1s
0 errors ✔ | 0 warnings ✔ | 0 notes ✔
R CMD check succeeded
## Test environments
- R-hub windows-x86_64-devel (r-devel)
- R-hub ubuntu-gcc-release (r-release)
- R-hub fedora-clang-devel (r-devel)
## R CMD check results
On windows-x86_64-devel (r-devel)
checking for non-standard things in the check directory ... NOTE
Found the following files/directories:
''NULL''
On windows-x86_64-devel (r-devel)
checking for detritus in the temp directory ... NOTE
'lastMiKTeXException'
On ubuntu-gcc-release (r-release), fedora-clang-devel (r-devel)
checking HTML version of manual ... NOTE
Skipping checking HTML validation: no command 'tidy' found
0 errors ✔ | 0 warnings ✔ | 3 notes ✖
Rhubv2 runs and checks out.

View file

@ -9,3 +9,11 @@ mtcars_redcap |>
write.csv(here::here("data/mtcars_redcap.csv"), row.names = FALSE)
usethis::use_data(mtcars_redcap, overwrite = TRUE)
gtsummary::trial |>
dplyr::mutate(
record_id = dplyr::row_number()
) |>
dplyr::select(record_id, dplyr::everything()) |>
write.csv(here::here("drafting/trials_redcap.csv"), row.names = FALSE)

View file

@ -11,3 +11,5 @@ redcapcast_data <- REDCapR::redcap_read(
# widen.data = FALSE)
usethis::use_data(redcapcast_data, overwrite = TRUE)
# write.csv(redcapcast_data,here::here("data/redcapcast_data.csv"),row.names = FALSE)

26
data/redcapcast_data.csv Normal file
View file

@ -0,0 +1,26 @@
"record_id","redcap_event_name","redcap_repeat_instrument","redcap_repeat_instance","cpr","inclusion","inclusion_time","dob","age","age_integer","sex","cohabitation","hypertension","diabetes","region","baseline_data_start_complete","mrs_assessed","mrs_date","mrs_score","mrs_complete","con_mrs","con_calc","consensus_complete","event_datetime","event_age","event_type","new_event_complete"
1,"inclusion",NA,NA,"1203401OB4",2023-03-13,12:38:49,1940-03-12,83.0023888238636,83,"female","Yes","No","Yes","East","Incomplete","Yes",2023-03-13,1,"Incomplete",NA,NA,NA,NA,NA,NA,NA
2,"inclusion",NA,NA,"0102342303",2023-03-01,10:38:57,1934-02-01,89.0778044723711,89,"male","Yes","No","No","South","Incomplete","Yes",2023-03-07,1,"Incomplete",NA,NA,NA,NA,NA,NA,NA
2,"follow1",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,"Yes",2023-03-09,3,"Incomplete",NA,NA,"Incomplete",NA,NA,NA,NA
2,"follow1","New Event (?)",1,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:49:42,NA,"TIA","Incomplete"
3,"inclusion",NA,NA,"2301569823",2022-03-08,12:01:07,1956-01-23,66.1231921257795,66,"male","No","Yes","Yes","North","Incomplete",NA,NA,NA,"Incomplete",NA,NA,NA,NA,NA,NA,NA
3,"follow1",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,"Yes",2022-08-16,2,"Incomplete",NA,NA,"Incomplete",NA,NA,NA,NA
3,"follow2",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,"Yes",2023-03-13,1,"Incomplete",NA,NA,"Incomplete",NA,NA,NA,NA
3,"follow1","New Event (?)",1,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:49:58,NA,"AIS","Incomplete"
3,"follow1","New Event (?)",2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:01,NA,"ICH","Incomplete"
3,"follow2","New Event (?)",1,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:05,NA,"ICH","Incomplete"
3,"follow2","New Event (?)",2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:07,NA,"TIA","Incomplete"
3,"follow2","New Event (?)",3,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:09,NA,"AIS","Incomplete"
4,"inclusion",NA,NA,"0204051342",2023-03-14,20:39:19,1905-04-02,117.949033861065,117,"female",NA,NA,NA,NA,"Incomplete",NA,NA,NA,"Incomplete",NA,NA,NA,NA,NA,NA,NA
4,"follow1",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,"Incomplete",NA,NA,"Incomplete",NA,NA,NA,NA
4,"follow2",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,"Incomplete",NA,NA,"Incomplete",NA,NA,NA,NA
4,"follow1","New Event (?)",1,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2001-04-11 08:39:05,96,"TIA","Complete"
4,"follow1","New Event (?)",2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2010-04-11 08:39:25,105,"TIA","Complete"
4,"follow2","New Event (?)",1,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:19,118,"AIS","Complete"
4,"follow2","New Event (?)",2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:22,118,"ICH","Incomplete"
4,"follow2","New Event (?)",3,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-01-18 12:50:24,118,"Unknown","Complete"
5,"inclusion",NA,NA,"0201976043",2023-03-23,08:50:31,1897-01-02,126.21751302217,126,"male","No","Yes","Yes","East","Complete",NA,NA,NA,"Incomplete",NA,NA,NA,NA,NA,NA,NA
5,"follow1",NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,"Incomplete",NA,NA,"Incomplete",NA,NA,NA,NA
5,"follow1","New Event (?)",1,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-04-11 09:00:33,127,"AIS","Complete"
5,"follow1","New Event (?)",2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,2024-04-11 09:00:41,127,"ICH","Complete"
6,"inclusion",NA,NA,"1202320122",2024-01-25,08:49:28,1932-02-12,91.952606829709,91,"female","No","Yes","No","East","Complete",NA,NA,NA,"Incomplete",NA,NA,NA,NA,NA,NA,NA

Binary file not shown.

Binary file not shown.

View file

@ -1,67 +1,105 @@
Andreas
Assesment
CMD
Codecov
DEPRICATED
DOI
DataDictionary
GStat
Gammelgaard
Github
GithubActions
JSON
Lifecycle
METACRAN
MMRM
Nav
ORCID
POSIXct
Pivotting
README
REDCap
REDCapR
REDCapRITS
THe
UI
Whishes
REDCapTidieR
Stackoverflow
WD
al
api
attr
calc
capitalisation
charater
cond
cpr
da
dafault
datadictionary
datetime
demonstrational
detatched
dir
dmy
docx
doi
dplyr
dropdown
droplevels
ds
dta
et
factorises
factorising
fct
forcats
github
gtsummary
gues
hms
immprovements
https
io
jbi
keyring
labelled
labelling
mRS
matadata
md
mdy
mis
mrs
mtcars
na
natively
ncol
og
param
params
pegeler
perl
pos
pre
rds
readr
rsconnect
realising
redcapAPI
redcapcast
renv
runApp
sel
sep
seperator
shinyapps
shinylive
stRoke
stata
strsplit
subheader
textclean
thorugh
tibble
tidyverse
transistion
trinker
truefalse
ui
un
unlabelled
uri
vec
wil
ymd

View file

@ -0,0 +1,372 @@
library(bslib)
library(shiny)
library(openxlsx2)
library(haven)
library(readODS)
library(readr)
library(dplyr)
library(gt)
library(devtools)
# if (!requireNamespace("REDCapCAST")) {
# install.packages("REDCapCAST")
# }
# library(REDCapCAST)
## Load merged files for shinyapps.io hosting
if (file.exists(here::here("functions.R"))) {
source(here::here("functions.R"))
}
server <- function(input, output, session) {
v <- shiny::reactiveValues(
file = NULL
)
ds <- shiny::reactive({
shiny::req(input$ds)
out <- read_input(input$ds$datapath)
out <- out |>
## Parses data with readr functions
parse_data() |>
## Converts logical to factor, preserving attributes with own function
dplyr::mutate(dplyr::across(dplyr::where(is.logical), as_factor))
out
})
dat <- shiny::reactive({
out <- ds()
if (!is.null(input$factor_vars)) {
out <- out |>
dplyr::mutate(
dplyr::across(
dplyr::all_of(input$factor_vars),
as_factor
)
)
}
if (input$factorize == "yes") {
out <- out |>
(\(.x){
suppressWarnings(
numchar2fct(.x)
)
})()
}
out
})
shiny::eventReactive(input$load_data, {
v$file <- "loaded"
})
# getData <- reactive({
# if(is.null(input$ds$datapath)) return(NULL)
# })
# output$uploaded <- reactive({
# return(!is.null(getData()))
# })
dd <- shiny::reactive({
shiny::req(input$ds)
# v$file <- "loaded"
ds2dd_detailed(
data = dat(),
add.auto.id = input$add_id == "yes",
metadata = c(
"field_name", "form_name", "section_header", "field_type",
"field_label", "select_choices_or_calculations", "field_note",
"text_validation_type_or_show_slider_number", "text_validation_min",
"text_validation_max", "identifier", "branching_logic", "required_field",
"custom_alignment", "question_number", "matrix_group_name", "matrix_ranking",
"field_annotation"
)
)
})
output$factor_vars <- shiny::renderUI({
shiny::req(input$ds)
selectizeInput(
inputId = "factor_vars",
selected = colnames(dat())[sapply(dat(), is.factor)],
label = "Covariables to format as categorical",
choices = colnames(dat()),
multiple = TRUE
)
})
## Specify ID if necessary
# output$id_var <- shiny::renderUI({
# shiny::req(input$ds)
# selectizeInput(
# inputId = "id_var",
# selected = colnames(dat())[1],
# label = "ID variable",
# choices = colnames(dat())[-match(colnames(dat()),input$factor_vars)],
# multiple = FALSE
# )
# })
output$data.tbl <- gt::render_gt(
dd() |>
cast_data_overview()
)
output$meta.tbl <- gt::render_gt(
dd() |>
cast_meta_overview()
)
# Downloadable csv of dataset ----
output$downloadData <- shiny::downloadHandler(
filename = "data_ready.csv",
content = function(file) {
write.csv(purrr::pluck(dd(), "data"), file, row.names = FALSE, na = "")
}
)
# Downloadable csv of data dictionary ----
output$downloadMeta <- shiny::downloadHandler(
filename = paste0("REDCapCAST_DataDictionary_", Sys.Date(), ".csv"),
content = function(file) {
write.csv(purrr::pluck(dd(), "meta"), file, row.names = FALSE, na = "")
}
)
# Downloadable .zip of instrument ----
output$downloadInstrument <- shiny::downloadHandler(
filename = paste0("REDCapCAST_instrument", Sys.Date(), ".zip"),
content = function(file) {
export_redcap_instrument(purrr::pluck(dd(), "meta"),
file = file,
record.id = ifelse(input$add_id == "none", NA, names(dat())[1])
)
}
)
output_staging <- shiny::reactiveValues()
output_staging$meta <- output_staging$data <- NA
shiny::observeEvent(input$upload.meta, {
upload_meta()
})
shiny::observeEvent(input$upload.data, {
upload_data()
})
upload_meta <- function() {
shiny::req(input$uri)
shiny::req(input$api)
output_staging$meta <- REDCapR::redcap_metadata_write(
ds = purrr::pluck(dd(), "meta"),
redcap_uri = input$uri,
token = input$api
) |> purrr::pluck("success")
}
upload_data <- function() {
shiny::req(input$uri)
shiny::req(input$api)
output_staging$data <- dd() |>
apply_factor_labels() |>
REDCapR::redcap_write(
redcap_uri = input$uri,
token = input$api
) |>
purrr::pluck("success")
}
output$upload.meta.print <- renderText(output_staging$meta)
output$upload.data.print <- renderText(output_staging$data)
output$uploaded <- shiny::reactive({
if (is.null(v$file)) {
"no"
} else {
"yes"
}
})
shiny::outputOptions(output, "uploaded", suspendWhenHidden = FALSE)
output$data.load <- shiny::renderText(expr = nrow(dat()))
# session$onSessionEnded(function() {
# # cat("Session Ended\n")
# unlink("www",recursive = TRUE)
# })
}
ui <-
bslib::page(
theme = bslib::bs_theme(preset = "united"),
title = "REDCap database creator",
bslib::page_navbar(
title = "Easy REDCap database creation",
sidebar = bslib::sidebar(
width = 300,
shiny::h5("Metadata casting"),
shiny::fileInput(
inputId = "ds",
label = "Upload spreadsheet",
multiple = FALSE,
accept = c(
".csv",
".xls",
".xlsx",
".dta",
".rds",
".ods"
)
),
shiny::actionButton(
inputId = "options",
label = "Show options",
icon = shiny::icon("wrench")
),
shiny::helpText("Choose and upload a dataset, then press the button for data modification and options for data download or upload."),
# For some odd reason this only unfolds when the preview panel is shown..
# This has been solved by adding an arbitrary button to load data - which was abandoned again
shiny::conditionalPanel(
# condition = "output.uploaded=='yes'",
condition = "input.options > 0",
shiny::radioButtons(
inputId = "add_id",
label = "Add ID, or use first column?",
selected = "no",
inline = TRUE,
choices = list(
"First column" = "no",
"Add ID" = "yes",
"No ID" = "none"
)
),
shiny::radioButtons(
inputId = "factorize",
label = "Factorize variables with few levels?",
selected = "yes",
inline = TRUE,
choices = list(
"Yes" = "yes",
"No" = "no"
)
),
shiny::radioButtons(
inputId = "specify_factors",
label = "Specify categorical variables?",
selected = "no",
inline = TRUE,
choices = list(
"Yes" = "yes",
"No" = "no"
)
),
shiny::conditionalPanel(
condition = "input.specify_factors=='yes'",
shiny::uiOutput("factor_vars")
),
# condition = "input.load_data",
# shiny::helpText("Below you can download the dataset formatted for upload and the
# corresponding data dictionary for a new data base, if you want to upload manually."),
shiny::tags$hr(),
shiny::h4("Download data for manual upload"),
shiny::helpText("Look further down for direct upload option"),
# Button
shiny::downloadButton(outputId = "downloadData", label = "Download renamed data"),
shiny::em("and then"),
# Button
shiny::downloadButton(outputId = "downloadMeta", label = "Download data dictionary"),
shiny::em("or"),
shiny::downloadButton(outputId = "downloadInstrument", label = "Download as instrument"),
# Horizontal line ----
shiny::tags$hr(),
shiny::radioButtons(
inputId = "upload_redcap",
label = "Upload directly to a REDCap server?",
selected = "no",
inline = TRUE,
choices = list(
"Yes" = "yes",
"No" = "no"
)
),
shiny::conditionalPanel(
condition = "input.upload_redcap=='yes'",
shiny::h4("2) Data base upload"),
shiny::helpText("This tool is usable for now. Detailed instructions are coming."),
shiny::textInput(
inputId = "uri",
label = "URI",
value = "https://redcap.your.institution/api/"
),
shiny::textInput(
inputId = "api",
label = "API key",
value = ""
),
shiny::helpText("An API key is an access key to the REDCap database. Please", shiny::a("see here for directions", href = "https://www.iths.org/news/redcap-tip/redcap-api-101/"), " to obtain an API key for your project."),
shiny::actionButton(
inputId = "upload.meta",
label = "Upload datadictionary", icon = shiny::icon("book-bookmark")
),
shiny::helpText("Please note, that before uploading any real data, put your project
into production mode."),
shiny::actionButton(
inputId = "upload.data",
label = "Upload data", icon = shiny::icon("upload")
)
)
),
shiny::br(),
shiny::br(),
shiny::br(),
shiny::p(
"License: ", shiny::a("GPL-3+", href = "https://agdamsbo.github.io/REDCapCAST/LICENSE.html")
),
shiny::p(
shiny::a("Package documentation", href = "https://agdamsbo.github.io/REDCapCAST")
)
),
bslib::nav_panel(
title = "Intro",
shiny::markdown(readLines("www/SHINYCAST.md")),
shiny::br(),
shiny::textOutput(outputId = "data.load")
),
# bslib::nav_spacer(),
bslib::nav_panel(
title = "Data preview",
gt::gt_output(outputId = "data.tbl")
# shiny::htmlOutput(outputId = "data.tbl", container = shiny::span)
),
bslib::nav_panel(
title = "Dictionary overview",
gt::gt_output(outputId = "meta.tbl")
# shiny::htmlOutput(outputId = "meta.tbl", container = shiny::span)
),
bslib::nav_panel(
title = "Upload",
shiny::h3("Meta upload overview"),
shiny::textOutput(outputId = "upload.meta.print"),
shiny::h3("Data upload overview"),
shiny::textOutput(outputId = "upload.data.print")
)
)
)
shiny::shinyApp(ui = ui, server = server)

View file

@ -0,0 +1,10 @@
name: redcapcast-dev
title:
username: agdamsbo
account: agdamsbo
server: shinyapps.io
hostUrl: https://api.shinyapps.io/v1
appId: 13463848
bundleId: 9425126
url: https://agdamsbo.shinyapps.io/redcapcast-dev/
version: 1

View file

@ -0,0 +1,10 @@
name: redcapcast
title:
username: agdamsbo
account: agdamsbo
server: shinyapps.io
hostUrl: https://api.shinyapps.io/v1
appId: 11351429
bundleId: 9642648
url: https://agdamsbo.shinyapps.io/redcapcast/
version: 1

View file

@ -0,0 +1,68 @@
# ![](logo.png) REDCapCAST app
Welcome to the REDCapCAST app to create/cast REDCap database metadata. This app allows you to create a new REDCap database or instrument based on a simple spreadsheet.
## Disclaimer
This tool is aimed at demonstrating use of REDCapCAST. The app can be run locally or on a hosted server (it will save no data anywhere). No responsibility for data loss or any other problems will be taken.
Also, this tool will not produce a ready-for-prime-time database, but it will be a comprehensive framework with suggestions for data classes. You will need to go through your database afterwards and take your time to ensure everything is as you'd expect and works as intended.
## Overview
The functions of this app can be described in two parts:
1. create REDCap metadata files, like a data dictionary or an instrument, based on a spreadsheet (.csv/.xls(x)/.dta/.ods) for download and manual upload to your REDCap server, or
2. upload the created database file and data to a given REDCap server via API access.
## Getting started
On the left, you initially just find one single option to upload a spreadsheet. Having done this, you can then preview the uploaded data and generated data dictionary by selecting the relevant tab on the top right.
### REDCap database files creation
The spreadsheet column names will be adjusted to comply with REDCap naming criteria, and a renamed (adjusted) spreadsheet can be downloaded. If your spreadsheet columns are labelled (exported from Stata or labelled in R), these labels will be used for the visible field names (field label) in REDCap.
Based on the uploaded spreadsheet, the app will make a qualified guess on data classes, and if the data is labelled (like .rda or .dta), all this information will be included in the data dictionary file. The default data format is "text". In addition, categorical variables can be specified manually, and you can add an ID column or assume the first column is the ID (please reorder before export).
If you want to add data to an existing database, an instrument can be created. This metadata file is identical to a data dictionary, but does not include the ID field (if included or added) and is packaged as a .zip file, which is uploaded in the "Designer" interface in REDCap.
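If you prefer working directly in R, here is a minimal sketch of the same casting done with the package functions (the file name is a placeholder):
```
library(REDCapCAST)
# Read the spreadsheet, cast a data dictionary and write it for manual upload
ds <- read_input("my_data.xlsx")                  # placeholder file name
cast <- ds2dd_detailed(ds, add.auto.id = TRUE)
write.csv(purrr::pluck(cast, "meta"), "datadictionary.csv",
          row.names = FALSE, na = "")
```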
### Transferring directly to a REDCap database
This feature is mainly a showcase. Use it if you like, but most will feel more secure doing manual uploads.
Based on the API-functions in REDCap, you can upload your data dictionary and renamed data directly from this interface (no data is stored on the server, but consider launching this shiny app on your own machine after having installed the [REDCapCAST package](https://agdamsbo.github.io/REDCapCAST/#installation) in R). Launch a local instance of this app with:
```
REDCapCAST::shiny_cast()
```
Please be aware that uploading a new data dictionary can delete data in your database and is not reversible. Make sure to save a backup beforehand. Also, uploading a data dictionary to a server in production is not possible. This step is only advisable for newly created databases. See the "Disclaimer" above.
## Background
The main structure of variables of a REDCap database is defined by a so-called data dictionary. This is a simple spreadsheet file defining one or more instruments, data classes, branching logic and more. It does not contain any information on randomization, longitudinal data or repeatable instruments. These functions must be set up in the REDCap interface after having defined the data dictionary.
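For orientation, a data dictionary uses these standard column headers (the set this app fills in when casting metadata):
```
field_name, form_name, section_header, field_type, field_label,
select_choices_or_calculations, field_note,
text_validation_type_or_show_slider_number, text_validation_min,
text_validation_max, identifier, branching_logic, required_field,
custom_alignment, question_number, matrix_group_name, matrix_ranking,
field_annotation
```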
## Motivation
This tool has been created out of frustration with the lack of easy-to-use tools available and with a hope to help colleagues and others to easily create and extend REDCap databases.
## Use and feedback
Please, if you use this tool, don't hesitate to contact me with feedback if something doesn't work as expected. But, please also mind the disclaimer above. Contact information can be found on the [package documentation page](https://agdamsbo.github.io/REDCapCAST/).
## Citing
This app and package can be cited using the following bibtex citation or by referencing the following doi-identifier: [10.5281/zenodo.8013984](https://doi.org/10.5281/zenodo.8013984)
```
@agdamsboREDCapCAST{,
title = {REDCapCAST: REDCap Castellated Data Handling and Metadata Casting},
author = {Andreas Gammelgaard Damsbo},
year = {2024},
note = {R package version 24.11.2, https://agdamsbo.github.io/REDCapCAST/},
url = {https://github.com/agdamsbo/REDCapCAST},
doi = {10.5281/zenodo.8013984},
}
```

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.1 KiB

29
man/REDCapCAST-package.Rd Normal file
View file

@ -0,0 +1,29 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/REDCapCAST-package.R
\docType{package}
\name{REDCapCAST-package}
\alias{REDCapCAST}
\alias{REDCapCAST-package}
\title{REDCapCAST: REDCap Metadata Casting and Castellated Data Handling}
\description{
Casting metadata for REDCap database creation and handling of castellated data using repeated instruments and longitudinal projects in 'REDCap'. Keeps a focused data export approach, by allowing to only export required data from the database. Also for casting new REDCap databases based on datasets from other sources. Originally forked from the R part of 'REDCapRITS' by Paul Egeler. See \url{https://github.com/pegeler/REDCapRITS}. 'REDCap' (Research Electronic Data Capture) is a secure, web-based software platform designed to support data capture for research studies, providing 1) an intuitive interface for validated data capture; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for data integration and interoperability with external sources (Harris et al (2009) \doi{10.1016/j.jbi.2008.08.010}; Harris et al (2019) \doi{10.1016/j.jbi.2019.103208}).
}
\seealso{
Useful links:
\itemize{
\item \url{https://github.com/agdamsbo/REDCapCAST}
\item \url{https://agdamsbo.github.io/REDCapCAST/}
\item Report bugs at \url{https://github.com/agdamsbo/REDCapCAST/issues}
}
}
\author{
\strong{Maintainer}: Andreas Gammelgaard Damsbo \email{agdamsbo@clin.au.dk} (\href{https://orcid.org/0000-0002-7559-1154}{ORCID})
Authors:
\itemize{
\item Paul Egeler \email{paulegeler@gmail.com} (\href{https://orcid.org/0000-0001-6948-9498}{ORCID})
}
}
\keyword{internal}

View file

@ -21,8 +21,7 @@ call.}
JSON from an API call.}
\item{primary_table_name}{Name given to the list element for the primary
output table (as described in \emph{README.md}). Ignored if
\code{forms = 'all'}.}
output table. Ignored if \code{forms = 'all'}.}
\item{forms}{Indicate whether to create separate tables for repeating
instruments only or for all forms.}
@ -66,7 +65,7 @@ metadata <- postForm(
)
# Convert exported JSON strings into a list of data.frames
REDCapRITS::REDCap_split(records, metadata)
REDCapCAST::REDCap_split(records, metadata)
# Using a raw data export -------------------------------------------------
@ -79,7 +78,7 @@ metadata <- read.csv(
)
# Split the tables
REDCapRITS::REDCap_split(records, metadata)
REDCapCAST::REDCap_split(records, metadata)
# In conjunction with the R export script ---------------------------------
@ -96,10 +95,10 @@ source("ExampleProject_R_2018-06-03_1700.r")
metadata <- read.csv("ExampleProject_DataDictionary_2018-06-03.csv")
# Split the tables
REDCapRITS::REDCap_split(data, metadata)
REDCapCAST::REDCap_split(data, metadata)
setwd(old)
}
}
\author{
Paul W. Egeler, M.S., GStat
Paul W. Egeler
}

20
man/all_na.Rd Normal file
View file

@ -0,0 +1,20 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ds2dd_detailed.R
\name{all_na}
\alias{all_na}
\title{Check if vector is all NA}
\usage{
all_na(data)
}
\arguments{
\item{data}{vector of data.frame}
}
\value{
logical
}
\description{
Check if vector is all NA
}
\examples{
rep(NA, 4) |> all_na()
}

View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/read_redcap_tables.R
\name{apply_factor_labels}
\alias{apply_factor_labels}
\title{Preserve all factor levels from REDCap data dictionary in data export}
\usage{
apply_factor_labels(data, meta = NULL)
}
\arguments{
\item{data}{REDCap exported data set}
\item{meta}{REDCap data dictionary}
}
\value{
data.frame
}
\description{
Preserve all factor levels from REDCap data dictionary in data export
}

19
man/apply_field_label.Rd Normal file
View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/read_redcap_tables.R
\name{apply_field_label}
\alias{apply_field_label}
\title{Apply REDCap field labels to data frame}
\usage{
apply_field_label(data, meta)
}
\arguments{
\item{data}{REDCap exported data set}
\item{meta}{REDCap data dictionary}
}
\value{
data.frame
}
\description{
Apply REDCap field labels to data frame
}

90
man/as_factor.Rd Normal file
View file

@ -0,0 +1,90 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{as_factor}
\alias{as_factor}
\alias{as_factor.factor}
\alias{as_factor.logical}
\alias{as_factor.numeric}
\alias{as_factor.character}
\alias{as_factor.haven_labelled}
\alias{as_factor.labelled}
\alias{as_factor.data.frame}
\title{Convert labelled vectors to factors while preserving attributes}
\usage{
as_factor(x, ...)
\method{as_factor}{factor}(x, ...)
\method{as_factor}{logical}(x, ...)
\method{as_factor}{numeric}(x, ...)
\method{as_factor}{character}(x, ...)
\method{as_factor}{haven_labelled}(
x,
levels = c("default", "labels", "values", "both"),
ordered = FALSE,
...
)
\method{as_factor}{labelled}(
x,
levels = c("default", "labels", "values", "both"),
ordered = FALSE,
...
)
\method{as_factor}{data.frame}(x, ..., only_labelled = TRUE)
}
\arguments{
\item{x}{Object to coerce to a factor.}
\item{...}{Other arguments passed down to method.}
\item{levels}{How to create the levels of the generated factor:
* "default": uses labels where available, otherwise the values.
Labels are sorted by value.
* "both": like "default", but pastes together the level and value
* "label": use only the labels; unlabelled values become `NA`
* "values": use only the values}
\item{ordered}{If `TRUE` create an ordered (ordinal) factor, if
`FALSE` (the default) create a regular (nominal) factor.}
\item{only_labelled}{Only apply to labelled columns?}
}
\description{
This extends \link[forcats]{as_factor} as well as \link[haven]{as_factor}, by appending
original attributes except for "class" after converting to factor to avoid
data loss in case of richly formatted and labelled data.
}
\details{
Please refer to parent functions for extended documentation.
To avoid redundant calls and errors, the parent functions are copy-pasted here
Empty variables with empty levels attribute are interpreted as logicals
}
\examples{
# will preserve all attributes
c(1, 4, 3, "A", 7, 8, 1) |> as_factor()
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10)
) |>
as_factor() |>
dput()
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "haven_labelled"
) |>
as_factor() |> class()
structure(rep(NA,10),
class = c("labelled")
) |>
as_factor() |> summary()
rep(NA,10) |> as_factor()
}

58
man/as_logical.Rd Normal file
View file

@ -0,0 +1,58 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_logical.R
\name{as_logical}
\alias{as_logical}
\alias{as_logical.data.frame}
\alias{as_logical.default}
\title{Interpret specific binary values as logicals}
\usage{
as_logical(
x,
values = list(c("TRUE", "FALSE"), c("Yes", "No"), c(1, 0), c(1, 2)),
...
)
\method{as_logical}{data.frame}(
x,
values = list(c("TRUE", "FALSE"), c("Yes", "No"), c(1, 0), c(1, 2)),
...
)
\method{as_logical}{default}(
x,
values = list(c("TRUE", "FALSE"), c("Yes", "No"), c(1, 0), c(1, 2)),
...
)
}
\arguments{
\item{x}{vector or data.frame}
\item{values}{list of values to interpret as logicals. First value is
interpreted as TRUE.}
\item{...}{ignored}
}
\value{
vector
}
\description{
Interpret specific binary values as logicals
}
\examples{
c(sample(c("TRUE", "FALSE"), 20, TRUE), NA) |>
as_logical() |>
class()
ds <- dplyr::tibble(
B = factor(sample(c(1, 2), 20, TRUE)),
A = factor(sample(c("TRUE", "FALSE"), 20, TRUE)),
C = sample(c(3, 4), 20, TRUE),
D = factor(sample(c("In", "Out"), 20, TRUE))
)
ds |>
as_logical() |>
sapply(class)
ds$A |> class()
sample(c("TRUE",NA), 20, TRUE) |>
as_logical()
as_logical(0)
}

View file

@ -0,0 +1,31 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/doc2dd.R
\name{case_match_regex_list}
\alias{case_match_regex_list}
\title{List-base regex case_when}
\usage{
case_match_regex_list(data, match.list, .default = NA)
}
\arguments{
\item{data}{vector}
\item{match.list}{list of case matches}
\item{.default}{Default value for non-matches. Default is NA.}
}
\value{
vector
}
\description{
Mimics case_when for list of regex patterns and values. Used for date/time
validation generation from name vector. Like case_when, the matches are in
order of priority.
Primarily used in REDCapCAST to do data type coding from systematic variable
naming.
}
\examples{
case_match_regex_list(
c("test_date", "test_time", "test_tida", "test_tid"),
list(date_dmy = "_dat[eo]$", time_hh_mm_ss = "_ti[md]e?$")
)
}

17
man/cast_data_overview.Rd Normal file
View file

@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shiny_cast.R
\name{cast_data_overview}
\alias{cast_data_overview}
\title{Overview of REDCapCAST data for shiny}
\usage{
cast_data_overview(data)
}
\arguments{
\item{data}{list with class 'REDCapCAST'}
}
\value{
gt object
}
\description{
Overview of REDCapCAST data for shiny
}

17
man/cast_meta_overview.Rd Normal file
View file

@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shiny_cast.R
\name{cast_meta_overview}
\alias{cast_meta_overview}
\title{Overview of REDCapCAST meta data for shiny}
\usage{
cast_meta_overview(data)
}
\arguments{
\item{data}{list with class 'REDCapCAST'}
}
\value{
gt object
}
\description{
Overview of REDCapCAST meta data for shiny
}

26
man/char2choice.Rd Normal file
View file

@ -0,0 +1,26 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/doc2dd.R
\name{char2choice}
\alias{char2choice}
\title{Simple function to generate REDCap choices from character vector}
\usage{
char2choice(data, char.split = "/", raw = NULL, .default = NA)
}
\arguments{
\item{data}{vector}
\item{char.split}{splitting character(s)}
\item{raw}{specific values. Can be used for options of same length.}
\item{.default}{default value for missing. Default is NA.}
}
\value{
vector
}
\description{
Simple function to generate REDCap choices from character vector
}
\examples{
char2choice(c("yes/no"," yep. / nope ","",NA,"what"),.default=NA)
}

35
man/char2cond.Rd Normal file
View file

@ -0,0 +1,35 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/doc2dd.R
\name{char2cond}
\alias{char2cond}
\title{Simple function to generate REDCap branching logic from character vector}
\usage{
char2cond(
data,
minor.split = ",",
major.split = ";",
major.sep = " or ",
.default = NA
)
}
\arguments{
\item{data}{vector}
\item{minor.split}{minor split}
\item{major.split}{major split}
\item{major.sep}{argument separation. Default is " or ".}
\item{.default}{default value for missing. Default is NA.}
}
\value{
vector
}
\description{
Simple function to generate REDCap branching logic from character vector
}
\examples{
#data <- dd_inst$betingelse
#c("Extubation_novent, 2; Pacu_delay, 1") |> char2cond()
}

22
man/clean_field_label.Rd Normal file
View file

@ -0,0 +1,22 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/read_redcap_tables.R
\name{clean_field_label}
\alias{clean_field_label}
\title{Very simple function to remove rich text formatting from field label
and save the first paragraph ('<p>...</p>').}
\usage{
clean_field_label(data)
}
\arguments{
\item{data}{field label}
}
\value{
character vector
}
\description{
Very simple function to remove rich text formatting from field label
and save the first paragraph ('<p>...</p>').
}
\examples{
clean_field_label("<div class=\"rich-text-field-label\"><p>Fazekas score</p></div>")
}

View file

@ -17,3 +17,6 @@ Stepwise removal on non-alphanumeric characters, trailing white space,
substitutes spaces for underscores and converts to lower case.
Trying to make up for different naming conventions.
}
\examples{
"Research!, ne:ws? and c;l-.ls" |> clean_redcap_name()
}

31
man/compact_vec.Rd Normal file
View file

@ -0,0 +1,31 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ds2dd_detailed.R
\name{compact_vec}
\alias{compact_vec}
\title{Compacting a vector of any length with or without names}
\usage{
compact_vec(data, nm.sep = ": ", val.sep = "; ")
}
\arguments{
\item{data}{vector, optionally named}
\item{nm.sep}{string separating name from value if any}
\item{val.sep}{string separating values}
}
\value{
character string
}
\description{
Compacting a vector of any length with or without names
}
\examples{
sample(seq_len(4), 20, TRUE) |>
as_factor() |>
named_levels() |>
sort() |>
compact_vec()
1:6 |> compact_vec()
"test" |> compact_vec()
sample(letters[1:9], 20, TRUE) |> compact_vec()
}

24
man/create_html_table.Rd Normal file
View file

@ -0,0 +1,24 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/html_styling.R
\name{create_html_table}
\alias{create_html_table}
\title{Create two-column HTML table for data piping in REDCap instruments}
\usage{
create_html_table(text, variable)
}
\arguments{
\item{text}{descriptive text}
\item{variable}{variable to pipe}
}
\value{
character vector
}
\description{
Create two-column HTML table for data piping in REDCap instruments
}
\examples{
create_html_table(text = "Patient ID", variable = c("[cpr]"))
create_html_table(text = paste("assessor", 1:2, sep = "_"), variable = c("[cpr]"))
# create_html_table(text = c("CPR nummer","Word"), variable = c("[cpr][1]", "[cpr][2]", "[test]"))
}

View file

@ -0,0 +1,52 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/export_redcap_instrument.R
\name{create_instrument_meta}
\alias{create_instrument_meta}
\title{DEPRECATED Create zip file with necessary content based on data set}
\usage{
create_instrument_meta(data, dir = here::here(""), record.id = TRUE)
}
\arguments{
\item{data}{metadata for the relevant instrument.
Could be from `ds2dd_detailed()`}
\item{dir}{destination dir for the instrument zip. Default is the current WD.}
\item{record.id}{flag to omit the first row of the data dictionary assuming
this is the record_id field which should not be included in the instrument.
Default is TRUE.}
}
\value{
list
}
\description{
Metadata can be added by editing the data dictionary of a project in the
initial design phase. If you want to later add new instruments, this
function can be used to create (an) instrument(s) to add to a project in
production.
}
\examples{
\dontrun{
data <- iris |>
ds2dd_detailed(
add.auto.id = TRUE,
form.name = sample(c("b", "c"),
size = 6,
replace = TRUE, prob = rep(.5, 2)
)
) |>
purrr::pluck("meta")
# data |> create_instrument_meta()
data <- iris |>
ds2dd_detailed(add.auto.id = FALSE) |>
purrr::pluck("data")
iris |>
setNames(glue::glue("{sample(x = c('a','b'),size = length(ncol(iris)),
replace=TRUE,prob = rep(x=.5,2))}__{names(iris)}")) |>
ds2dd_detailed(form.sep = "__")
data |>
purrr::pluck("meta") |>
create_instrument_meta(record.id = FALSE)
}
}

22
man/cut_string_length.Rd Normal file
View file

@ -0,0 +1,22 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.r
\name{cut_string_length}
\alias{cut_string_length}
\title{Cut string to desired length}
\usage{
cut_string_length(data, l = 100)
}
\arguments{
\item{data}{data}
\item{l}{length}
}
\value{
character string of length l
}
\description{
Cut string to desired length
}
\examples{
"length" |> cut_string_length(l=3)
}

View file

@ -1,23 +0,0 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shiny_cast.R
\name{deploy_shiny}
\alias{deploy_shiny}
\title{Deploy the Shiny app with rsconnect}
\usage{
deploy_shiny(path = here::here("app/"), name.app = "shiny_cast")
}
\arguments{
\item{path}{app folder path}
\item{name.app}{name of deployed app}
}
\value{
deploy
}
\description{
Deploy the Shiny app with rsconnect
}
\examples{
# deploy_shiny
}

80
man/doc2dd.Rd Normal file
View file

@ -0,0 +1,80 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/doc2dd.R
\name{doc2dd}
\alias{doc2dd}
\title{Doc table to data dictionary - EARLY, DOCS MISSING}
\usage{
doc2dd(
data,
instrument.name,
col.variables = 1,
list.datetime.format = list(date_dmy = "_dat[eo]$", time_hh_mm_ss = "_ti[md]e?$"),
col.description = NULL,
col.condition = NULL,
col.subheader = NULL,
subheader.tag = "h2",
condition.minor.sep = ",",
condition.major.sep = ";",
col.calculation = NULL,
col.choices = NULL,
choices.char.sep = "/",
missing.default = NA
)
}
\arguments{
\item{data}{tibble or data.frame with all variable names in one column}
\item{instrument.name}{character vector length one. Instrument name.}
\item{col.variables}{variable names column (default = 1), allows dplyr
subsetting}
\item{list.datetime.format}{formatting for date/time detection.
See `case_match_regex_list()`}
\item{col.description}{descriptions column, allows dplyr
subsetting. If empty, variable names will be used.}
\item{col.condition}{conditions for branching column, allows dplyr
subsetting. See `char2cond()`.}
\item{col.subheader}{sub-header column, allows dplyr subsetting.
See `format_subheader()`.}
\item{subheader.tag}{formatting tag. Default is "h2"}
\item{condition.minor.sep}{condition split minor. See `char2cond()`.
Default is ",".}
\item{condition.major.sep}{condition split major. See `char2cond()`.
Default is ";".}
\item{col.calculation}{calculations column. Has to be written exact.
Character vector.}
\item{col.choices}{choices column. See `char2choice()`.}
\item{choices.char.sep}{choices split. See `char2choice()`. Default is "/".}
\item{missing.default}{value for missing fields. Default is NA.}
}
\value{
tibble or data.frame (same as data)
}
\description{
Works well with `project.aid::docx2list()`.
Allows defining a database in a text document (see provided template) for
an easier to use data base creation. This approach allows easier
collaboration when defining the database. The generic case is a data frame
with variable names as values in a column. This is a format like the REDCap
data dictionary, but gives a few options for formatting.
}
\examples{
# data <- dd_inst
# data |> doc2dd(instrument.name = "evt",
# col.description = 3,
# col.condition = 4,
# col.subheader = 2,
# col.calculation = 5,
# col.choices = 6)
}

View file

@ -1,5 +1,5 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ds2dd.R
% Please edit documentation in R/ds2dd_detailed.R
\name{ds2dd}
\alias{ds2dd}
\title{(DEPRECATED) Data set to data dictionary function}
@ -11,7 +11,7 @@ ds2dd(
field.type = "text",
field.label = NULL,
include.column.names = FALSE,
metadata = metadata_names
metadata = names(REDCapCAST::redcapcast_meta)
)
}
\arguments{
@ -34,7 +34,7 @@ names.}
column names for original data set for upload.}
\item{metadata}{Metadata column names. Default is the included
REDCapCAST::metadata_names.}
names(REDCapCAST::redcapcast_meta).}
}
\value{
data.frame or list of data.frame and vector
@ -49,5 +49,5 @@ Migrated from stRoke ds2dd(). Fits better with the functionality of
}
\examples{
redcapcast_data$record_id <- seq_len(nrow(redcapcast_data))
ds2dd(redcapcast_data, include.column.names=TRUE)
ds2dd(redcapcast_data, include.column.names = TRUE)
}

View file

@ -9,14 +9,14 @@ ds2dd_detailed(
add.auto.id = FALSE,
date.format = "dmy",
form.name = NULL,
form.sep = NULL,
form.prefix = TRUE,
field.type = NULL,
field.label = NULL,
field.label.attr = "label",
field.validation = NULL,
metadata = metadata_names,
validate.time = FALSE,
time.var.sel.pos = "[Tt]i[d(me)]",
time.var.sel.neg = "[Dd]at[eo]"
metadata = names(REDCapCAST::redcapcast_meta),
convert.logicals = FALSE
)
}
\arguments{
@ -30,6 +30,13 @@ dmy.}
\item{form.name}{manually specify form name(s). Vector of length 1 or
ncol(data). Default is NULL and "data" is used.}
\item{form.sep}{If supplied dataset has form names as suffix or prefix to the
column/variable names, the separator can be specified. If supplied, the
form.name is ignored. Default is NULL.}
\item{form.prefix}{Flag to set if form is prefix (TRUE) or suffix (FALSE) to
the column names. Assumes all columns have pre- or suffix if specified.}
\item{field.type}{manually specify field type(s). Vector of length 1 or
ncol(data). Default is NULL and "text" is used for everything but factors,
which will get "radio".}
@ -48,15 +55,9 @@ or attribute `factor.labels.attr` for haven_labelled data set (imported .dta
file with `haven::read_dta()`).}
\item{metadata}{redcap metadata headings. Default is
REDCapCAST:::metadata_names.}
names(REDCapCAST::redcapcast_meta).}
\item{validate.time}{Flag to validate guessed time columns}
\item{time.var.sel.pos}{Positive selection regex string passed to
`gues_time_only_filter()` as sel.pos.}
\item{time.var.sel.neg}{Negative selection regex string passed to
`gues_time_only_filter()` as sel.neg.}
\item{convert.logicals}{convert logicals to factor. Default is FALSE.}
}
\value{
list of length 2
@ -74,9 +75,32 @@ Ensure, that the data set is formatted with as much information as possible.
`field.type` can be supplied
}
\examples{
data <- redcapcast_data
data |> ds2dd_detailed(validate.time = TRUE)
data |> ds2dd_detailed()
## Basic parsing with default options
requireNamespace("REDCapCAST")
redcapcast_data |>
dplyr::select(-dplyr::starts_with("redcap_")) |>
ds2dd_detailed()
## Adding a record_id field
iris |> ds2dd_detailed(add.auto.id = TRUE)
mtcars |> ds2dd_detailed(add.auto.id = TRUE)
## Passing form name information to function
iris |>
ds2dd_detailed(
add.auto.id = TRUE,
form.name = sample(c("b", "c"), size = 6, replace = TRUE, prob = rep(.5, 2))
) |>
purrr::pluck("meta")
mtcars |>
dplyr::mutate(unknown = NA) |>
numchar2fct() |>
ds2dd_detailed(add.auto.id = TRUE)
## Using column name suffix to carry form name
data <- iris |>
ds2dd_detailed(add.auto.id = TRUE) |>
purrr::pluck("data")
names(data) <- glue::glue("{sample(x = c('a','b'),size = length(names(data)),
replace=TRUE,prob = rep(x=.5,2))}__{names(data)}")
data |> ds2dd_detailed(form.sep = "__")
}

View file

@ -4,17 +4,31 @@
\alias{easy_redcap}
\title{Secure API key storage and data acquisition in one}
\usage{
easy_redcap(project.name, widen.data = TRUE, uri, ...)
easy_redcap(
project.name,
uri,
raw_or_label = "both",
data_format = c("wide", "list", "redcap", "long"),
widen.data = NULL,
...
)
}
\arguments{
\item{project.name}{The name of the current project (for key storage with
`keyring::key_set()`, using the default keyring)}
\item{widen.data}{argument to widen the exported data}
\link[keyring]{key_set}, using the default keyring)}
\item{uri}{REDCap database API uri}
\item{...}{arguments passed on to `REDCapCAST::read_redcap_tables()`}
\item{raw_or_label}{argument passed on to
\link[REDCapCAST]{read_redcap_tables}. Default is "both" to get labelled
data.}
\item{data_format}{Choose the data format: one of "wide", "list", "redcap" or "long".}
\item{widen.data}{argument to widen the exported data. [DEPRECATED], use
`data_format` instead}
\item{...}{arguments passed on to \link[REDCapCAST]{read_redcap_tables}.}
}
\value{
data.frame or list depending on widen.data
@ -22,3 +36,8 @@ data.frame or list depending on widen.data
\description{
Secure API key storage and data acquisition in one
}
\examples{
\dontrun{
easy_redcap("My_new_project", fields = c("record_id", "age", "hypertension"))
}
}
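
A hedged sketch of a fuller call using the new data_format argument; the project name, uri and field names are placeholders, and the call needs a reachable REDCap server plus a stored API key:

# Not run: requires API access and an interactive keyring
easy_redcap(
  project.name = "My_new_project",
  uri = "https://redcap.example.org/api/",
  raw_or_label = "both",
  data_format = "wide",
  fields = c("record_id", "age", "hypertension")
)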

View file

@ -0,0 +1,49 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/export_redcap_instrument.R
\name{export_redcap_instrument}
\alias{export_redcap_instrument}
\title{Creates zip-file with necessary content to manually add instrument to database}
\usage{
export_redcap_instrument(data, file, force = FALSE, record.id = "record_id")
}
\arguments{
\item{data}{metadata for the relevant instrument.
Could be from `ds2dd_detailed()`}
\item{file}{destination file name.}
\item{force}{force instrument creation and ignore different form names by
just using the first.}
\item{record.id}{record id variable name. Default is 'record_id'.}
}
\value{
exports zip-file
}
\description{
Metadata can be added by editing the data dictionary of a project in the
initial design phase. If you want to later add new instruments, this
function can be used to create (an) instrument(s) to add to a project in
production.
}
\examples{
\dontrun{
iris |>
  ds2dd_detailed(
    add.auto.id = TRUE,
    form.name = sample(c("b", "c"), size = 6, replace = TRUE, prob = rep(.5, 2))
  ) |>
  purrr::pluck("meta") |>
  (\(.x){
    split(.x, .x$form_name)
  })() |>
  purrr::imap(function(.x, .i) {
    export_redcap_instrument(.x, file = here::here(paste0(.i, Sys.Date(), ".zip")))
  })
iris |>
  ds2dd_detailed(add.auto.id = TRUE) |>
  purrr::pluck("meta") |>
  export_redcap_instrument(file = here::here(paste0("instrument", Sys.Date(), ".zip")))
}
}
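
A minimal sketch mirroring the examples above, but writing to a temporary file so nothing lands in the working directory (the mtcars-based dictionary is only illustrative):

mtcars |>
  ds2dd_detailed(add.auto.id = TRUE) |>
  purrr::pluck("meta") |>
  export_redcap_instrument(file = tempfile(fileext = ".zip"))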

42
man/fct2num.Rd Normal file
View file

@ -0,0 +1,42 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{fct2num}
\alias{fct2num}
\title{Allows conversion of factor to numeric values preserving original levels}
\usage{
fct2num(data)
}
\arguments{
\item{data}{vector}
}
\value{
numeric vector
}
\description{
Allows conversion of factor to numeric values preserving original levels
}
\examples{
c(1, 4, 3, "A", 7, 8, 1) |>
as_factor() |>
fct2num()
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "haven_labelled"
) |>
as_factor() |>
fct2num()
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "labelled"
) |>
as_factor() |>
fct2num()
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10)
) |>
as_factor() |>
fct2num()
}
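
For contrast, a short sketch of what plain coercion loses; the fct2num() line states the intended behaviour per the description, not a verified result:

f <- c("1", "4", "3", "7", "8", "1") |> as_factor()
as.numeric(f) # level positions, not the original values
fct2num(f)    # intended to recover the original numeric values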

31
man/fct_drop.Rd Normal file
View file

@ -0,0 +1,31 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fct_drop.R
\name{fct_drop}
\alias{fct_drop}
\alias{fct_drop.data.frame}
\alias{fct_drop.factor}
\title{Drop unused levels preserving label data}
\usage{
fct_drop(x, ...)
\method{fct_drop}{data.frame}(x, ...)
\method{fct_drop}{factor}(x, ...)
}
\arguments{
\item{x}{Factor to drop unused levels}
\item{...}{Other arguments passed down to method.}
}
\description{
This extends [forcats::fct_drop()] to natively work across a data.frame and
replaces [base::droplevels()].
}
\examples{
mtcars |>
numchar2fct() |>
fct_drop()
mtcars |>
numchar2fct() |>
dplyr::mutate(vs = fct_drop(vs))
}
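
A companion sketch showing the effect on a single column after filtering; it assumes numchar2fct() turns vs into a factor with both levels present:

df <- mtcars |> numchar2fct()
levels(df$vs) # "0" "1"
df_sub <- df |>
  dplyr::filter(vs == "0") |>
  fct_drop()
levels(df_sub$vs) # "0": the unused level is gone; label data is expected to be kept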


View file

@ -1,8 +1,8 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.r
% Please edit documentation in R/shiny_cast.R
\name{file_extension}
\alias{file_extension}
\title{Helper to import files correctly}
\title{DEPRECATED Helper to import files correctly}
\usage{
file_extension(filenames)
}
@ -13,8 +13,9 @@ file_extension(filenames)
character vector
}
\description{
Helper to import files correctly
DEPRECATED Helper to import files correctly
}
\examples{
file_extension(list.files(here::here(""))[[2]])[[1]]
file_extension(c("file.cd..ks", "file"))
}

View file

@ -0,0 +1,23 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/read_redcap_tables.R
\name{format_redcap_factor}
\alias{format_redcap_factor}
\title{Converts REDCap choices to factor levels and stores in labels attribute}
\usage{
format_redcap_factor(data, meta)
}
\arguments{
\item{data}{vector}
\item{meta}{vector of REDCap choices}
}
\value{
vector of class "labelled" with a "labels" attribute
}
\description{
Applying \link[REDCapCAST]{as_factor} to the data.frame or variable will
coerce it to a factor.
}
\examples{
format_redcap_factor(sample(1:3, 20, TRUE), "1, First. | 2, second | 3, THIRD")
}
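
A hedged follow-on to the example above, showing the intended round trip from a REDCap choice string to a labelled vector and then to a factor (the printed shapes are assumptions):

x <- format_redcap_factor(sample(1:3, 20, TRUE), "1, First. | 2, second | 3, THIRD")
attr(x, "labels") # named mapping between choice labels and raw values (assumed)
as_factor(x)      # factor carrying the REDCap choice labels (assumed)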

22
man/format_subheader.Rd Normal file
View file

@ -0,0 +1,22 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/html_styling.R
\name{format_subheader}
\alias{format_subheader}
\title{Sub-header formatting wrapper}
\usage{
format_subheader(data, tag = "h2")
}
\arguments{
\item{data}{character vector}
\item{tag}{character vector length 1}
}
\value{
character vector
}
\description{
Sub-header formatting wrapper
}
\examples{
"Instrument header" |> format_subheader()
}

View file

@ -4,14 +4,18 @@
\alias{get_api_key}
\title{Retrieve project API key if stored, if not, set and retrieve}
\usage{
get_api_key(key.name)
get_api_key(key.name, ...)
}
\arguments{
\item{key.name}{character vector of key name}
\item{...}{passed to \link[keyring]{key_set}}
}
\value{
character vector
}
\description{
Retrieve project API key if stored, if not, set and retrieve
Attempting to make secure API key storage so simple that no other way makes
sense. Wraps \link[keyring]{key_get} and \link[keyring]{key_set}, using
\link[keyring]{key_list} to check whether the key is already in storage.
}
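
No example ships with this page; a hedged interactive sketch with a placeholder key name:

# Prompts for the key on first use, then reads it silently from the default keyring
api_key <- get_api_key("My_new_project_REDCAP_API")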

28
man/get_attr.Rd Normal file
View file

@ -0,0 +1,28 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{get_attr}
\alias{get_attr}
\title{Extract attribute. Returns NA if none}
\usage{
get_attr(data, attr = NULL)
}
\arguments{
\item{data}{vector}
\item{attr}{attribute name}
}
\value{
character vector
}
\description{
Extract attribute. Returns NA if none
}
\examples{
attr(mtcars$mpg, "label") <- "testing"
do.call(c, sapply(mtcars, get_attr))
\dontrun{
mtcars |>
numchar2fct(numeric.threshold = 6) |>
ds2dd_detailed()
}
}

33
man/guess_time_only.Rd Normal file
View file

@ -0,0 +1,33 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ds2dd_detailed.R
\name{guess_time_only}
\alias{guess_time_only}
\title{Guess time variables based on naming pattern}
\usage{
guess_time_only(
data,
validate.time = FALSE,
time.var.sel.pos = "[Tt]i[d(me)]",
time.var.sel.neg = "[Dd]at[eo]"
)
}
\arguments{
\item{data}{data.frame or tibble}
\item{validate.time}{Flag to validate guessed time columns}
\item{time.var.sel.pos}{Positive selection regex string passed to
`gues_time_only_filter()` as sel.pos.}
\item{time.var.sel.neg}{Negative selection regex string passed to
`gues_time_only_filter()` as sel.neg.}
}
\value{
data.frame or tibble
}
\description{
This is for repairing data where time-only variables have "1970-01-01" appended.
}
\examples{
redcapcast_data |> guess_time_only(validate.time = TRUE)
}
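
A hedged sketch of the repair case described above: a time-of-day variable stored as a date-time on 1970-01-01, with a name matching the default positive selection regex:

df <- data.frame(
  record_id = 1:2,
  visit_time = as.POSIXct(c("1970-01-01 08:30:00", "1970-01-01 14:00:00"), tz = "UTC")
)
df |>
  guess_time_only() |>
  str()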

26
man/haven_all_levels.Rd Normal file
View file

@ -0,0 +1,26 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{haven_all_levels}
\alias{haven_all_levels}
\title{Finish incomplete haven attributes substituting missings with values}
\usage{
haven_all_levels(data)
}
\arguments{
\item{data}{haven labelled variable}
}
\value{
named vector
}
\description{
Finish incomplete haven attributes substituting missings with values
}
\examples{
ds <- structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "haven_labelled"
)
haven::is.labelled(ds)
attributes(ds)
ds |> haven_all_levels()
}

25
man/html_tag_wrap.Rd Normal file
View file

@ -0,0 +1,25 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/html_styling.R
\name{html_tag_wrap}
\alias{html_tag_wrap}
\title{Simple html tag wrapping for REDCap text formatting}
\usage{
html_tag_wrap(data, tag = "h2", extra = NULL)
}
\arguments{
\item{data}{character vector}
\item{tag}{character vector length 1}
\item{extra}{character vector}
}
\value{
character vector
}
\description{
Simple html tag wrapping for REDCap text formatting
}
\examples{
html_tag_wrap("Titel", tag = "div", extra = 'class="rich-text-field-label"')
html_tag_wrap("Titel", tag = "h2")
}

25
man/is.labelled.Rd Normal file
View file

@ -0,0 +1,25 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{is.labelled}
\alias{is.labelled}
\title{Tests for multiple label classes}
\usage{
is.labelled(x, classes = c("haven_labelled", "labelled"))
}
\arguments{
\item{x}{data}
\item{classes}{classes to test}
}
\value{
logical
}
\description{
Tests for multiple label classes
}
\examples{
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "haven_labelled"
) |> is.labelled()
}

19
man/is_missing.Rd Normal file
View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/doc2dd.R
\name{is_missing}
\alias{is_missing}
\title{Multi missing check}
\usage{
is_missing(data, nas = c("", "NA"))
}
\arguments{
\item{data}{character vector}
\item{nas}{character vector of strings considered as NA}
}
\value{
logical vector
}
\description{
Multi missing check
}
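
No example ships with this helper; a minimal sketch of the intended behaviour under the default and a custom nas vector (the expected output is an assumption):

is_missing(c("1", "", "NA", "7"))          # expected: FALSE TRUE TRUE FALSE
is_missing(c("1", "-", "7"), nas = c("-")) # treat "-" as the missing string instead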

46
man/named_levels.Rd Normal file
View file

@ -0,0 +1,46 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{named_levels}
\alias{named_levels}
\title{Get named vector of factor levels and values}
\usage{
named_levels(
data,
label = "labels",
na.label = NULL,
na.value = 99,
sort.numeric = TRUE
)
}
\arguments{
\item{data}{factor}
\item{label}{character string of attribute with named vector of factor labels}
\item{na.label}{character string to refactor NA values. Default is NULL.}
\item{na.value}{new value for NA strings. Ignored if na.label is NULL.
Default is 99.}
\item{sort.numeric}{sort factor levels if levels are numeric. Default is TRUE}
}
\value{
named vector
}
\description{
Get named vector of factor levels and values
}
\examples{
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "haven_labelled"
) |>
as_factor() |>
named_levels()
structure(c(1, 2, 3, 2, 10, 9),
labels = c(Unknown = 9, Refused = 10),
class = "labelled"
) |>
as_factor() |>
named_levels()
}

14
man/nav_bar_page.Rd Normal file
View file

@ -0,0 +1,14 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shiny_cast.R
\name{nav_bar_page}
\alias{nav_bar_page}
\title{Nav_bar defining function for shiny ui}
\usage{
nav_bar_page()
}
\value{
shiny object
}
\description{
Nav_bar defining function for shiny ui
}

31
man/numchar2fct.Rd Normal file
View file

@ -0,0 +1,31 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ds2dd_detailed.R
\name{numchar2fct}
\alias{numchar2fct}
\title{Applying var2fct across data set}
\usage{
numchar2fct(data, numeric.threshold = 6, character.throshold = 6)
}
\arguments{
\item{data}{dataset. data.frame or tibble}
\item{numeric.threshold}{threshold for var2fct for numeric columns. Default
is 6.}
\item{character.throshold}{threshold for var2fct for character columns.
Default is 6.}
}
\value{
data.frame or tibble
}
\description{
Individual thresholds for character and numeric columns
}
\examples{
mtcars |> str()
\dontrun{
mtcars |>
numchar2fct(numeric.threshold = 6) |>
str()
}
}

39
man/parse_data.Rd Normal file
View file

@ -0,0 +1,39 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ds2dd_detailed.R
\name{parse_data}
\alias{parse_data}
\title{Helper to auto-parse un-formatted data with haven and readr}
\usage{
parse_data(
data,
guess_type = TRUE,
col_types = NULL,
locale = readr::default_locale(),
ignore.vars = "cpr",
...
)
}
\arguments{
\item{data}{data.frame or tibble}
\item{guess_type}{logical to guess type with readr}
\item{col_types}{specify col_types using readr semantics. Ignored if guess_type is TRUE}
\item{locale}{option to specify locale. Defaults to readr::default_locale().}
\item{ignore.vars}{specify column names of columns to ignore when parsing}
\item{...}{ignored}
}
\value{
data.frame or tibble
}
\description{
Helper to auto-parse un-formatted data with haven and readr
}
\examples{
mtcars |>
parse_data() |>
str()
}
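
A hedged companion sketch with character input, where readr guessing is expected to return a numeric and a Date column (column names are illustrative):

data.frame(
  y = c("1", "2.5", "3"),
  d = c("2024-01-01", "2024-02-01", "2024-03-01")
) |>
  parse_data() |>
  str()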

23
man/possibly_numeric.Rd Normal file
View file

@ -0,0 +1,23 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{possibly_numeric}
\alias{possibly_numeric}
\title{Tests if vector can be interpreted as numeric without introducing NAs by
coercion}
\usage{
possibly_numeric(data)
}
\arguments{
\item{data}{vector}
}
\value{
logical
}
\description{
Tests if vector can be interpreted as numeric without introducing NAs by
coercion
}
\examples{
c("1","5") |> possibly_numeric()
c("1","5","e") |> possibly_numeric()
}

24
man/possibly_roman.Rd Normal file
View file

@ -0,0 +1,24 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/as_factor.R
\name{possibly_roman}
\alias{possibly_roman}
\title{Test if vector can be interpreted as roman numerals}
\usage{
possibly_roman(data)
}
\arguments{
\item{data}{character vector}
}
\value{
logical
}
\description{
Test if vector can be interpreted as roman numerals
}
\examples{
sample(1:100, 10) |>
as.roman() |>
possibly_roman()
sample(c(TRUE, FALSE), 10, TRUE) |> possibly_roman()
rep(NA, 10) |> possibly_roman()
}

17
man/process_user_input.Rd Normal file
View file

@ -0,0 +1,17 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/process_user_input.r
\name{process_user_input}
\alias{process_user_input}
\title{User input processing}
\usage{
process_user_input(x)
}
\arguments{
\item{x}{input}
}
\value{
processed input
}
\description{
User input processing
}

View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/process_user_input.r
\name{process_user_input.character}
\alias{process_user_input.character}
\title{User input processing character}
\usage{
\method{process_user_input}{character}(x, ...)
}
\arguments{
\item{x}{input}
\item{...}{ignored}
}
\value{
processed input
}
\description{
User input processing character
}

View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/process_user_input.r
\name{process_user_input.data.frame}
\alias{process_user_input.data.frame}
\title{User input processing data.frame}
\usage{
\method{process_user_input}{data.frame}(x, ...)
}
\arguments{
\item{x}{input}
\item{...}{ignored}
}
\value{
processed input
}
\description{
User input processing data.frame
}

View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/process_user_input.r
\name{process_user_input.default}
\alias{process_user_input.default}
\title{User input processing default}
\usage{
\method{process_user_input}{default}(x, ...)
}
\arguments{
\item{x}{input}
\item{...}{ignored}
}
\value{
processed input
}
\description{
User input processing default
}

View file

@ -0,0 +1,19 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/process_user_input.r
\name{process_user_input.response}
\alias{process_user_input.response}
\title{User input processing response}
\usage{
\method{process_user_input}{response}(x, ...)
}
\arguments{
\item{x}{input}
\item{...}{ignored}
}
\value{
processed input
}
\description{
User input processing response
}

View file

@ -1,5 +1,5 @@
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.r
% Please edit documentation in R/shiny_cast.R
\name{read_input}
\alias{read_input}
\title{Flexible file import based on extension}

View file

@ -11,8 +11,9 @@ read_redcap_tables(
fields = NULL,
events = NULL,
forms = NULL,
raw_or_label = "label",
split_forms = "all"
raw_or_label = c("raw", "label", "both"),
split_forms = c("all", "repeating", "none"),
...
)
}
\arguments{
@ -28,20 +29,32 @@ read_redcap_tables(
\item{forms}{forms to download}
\item{raw_or_label}{raw or label tags}
\item{raw_or_label}{raw or label tags. Can be "raw", "label" or "both".
* "raw": Standard \link[REDCapR]{redcap_read} method to get raw values.
* "label": Standard \link[REDCapR]{redcap_read} method to get label values.
* "both": Get raw values with REDCap labels applied as labels. Use
\link[REDCapCAST]{as_factor} to format factors with original labels and use
the `gtsummary` package functions like \link[gtsummary]{tbl_summary} to
easily get beautiful tables with original labels from REDCap. Use
\link[REDCapCAST]{fct_drop} to drop empty levels.}
\item{split_forms}{Whether to split "repeating" or "all" forms, default is
all.}
all. Give "none" to export the native semi-long REDCap format.}
\item{...}{passed on to \link[REDCapR]{redcap_read}}
}
\value{
list of instruments
}
\description{
Implementation of REDCap_split with a focused data acquisition approach using
REDCapR::redcap_read and only downloading specified fields, forms and/or
events using the built-in focused_metadata including some clean-up.
Implementation of \link[REDCapCAST]{REDCap_split} with a focused data
acquisition approach using \link[REDCapR]{redcap_read} and only downloading
specified fields, forms and/or events using the built-in focused_metadata
including some clean-up.
Works with classical and longitudinal projects with or without repeating
instruments.
Will preserve metadata in the data.frames as labels.
}
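
A hedged sketch of the raw_or_label = "both" workflow described above; the uri, key name and instrument name are placeholders and the call needs API access:

# Not run: requires a reachable REDCap server and a stored API key
read_redcap_tables(
  uri = "https://redcap.example.org/api/",
  token = keyring::key_get("My_new_project_REDCAP_API"),
  raw_or_label = "both"
) |>
  purrr::pluck("baseline") |> # one instrument from the returned list (placeholder name)
  as_factor() |>
  fct_drop() |>
  gtsummary::tbl_summary()
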
\examples{
# Examples will be provided later

Some files were not shown because too many files have changed in this diff.