14 testing implement tests for full coverage (#128)
* Merge latest updates (#124)

* Update nvecs to use tenmat.

* Full implementation of collapse. Required implementation of tensor.from_tensor_type for tenmat objects. Updated tensor tests. (#32)

* Update __init__.py

Bump version.

* Create CHANGELOG.md

Changelog update

* Update CHANGELOG.md

Consistent formatting

* Update CHANGELOG.md

Correction

* Create ci-tests.yml

* Update README.md

Adding coverage statistics from coveralls.io

* Create requirements.txt

* 33 use standard license (#34)

* Use standard, correctly formatted LICENSE

* Delete LICENSE

* Create LICENSE

* Update and rename ci-tests.yml to regression-tests.yml

* Update README.md

* Fix bug in tensor.mttkrp that only showed up when ndims > 3. (#36)

* Update __init__.py

Bump version

* Bump version

* Adding files to support pypi dist creation and uploading

* Fix PyPi installs. Bump version.

* Fixing np.reshape usage. Adding more tests for tensor.ttv. (#38)

* Fixing issues with np.reshape; requires order='F' to align with Matlab functionality. (#39)

Closes #30.
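For reference, a minimal illustration of the point (plain NumPy behavior, nothing pyttb-specific): MATLAB reshapes column-major, which corresponds to order='F' in NumPy.

    import numpy as np
    x = np.arange(6)                     # [0 1 2 3 4 5]
    np.reshape(x, (2, 3))                # C order (default): [[0 1 2] [3 4 5]]
    np.reshape(x, (2, 3), order="F")     # Fortran/MATLAB order: [[0 2 4] [1 3 5]]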

* Bump version.

* Adding tensor.ttm. Adding use case in tenmat to support ttm testing. (#40)

Closes #27

* Bump version

* Format CHANGELOG

* Update CHANGELOG.md

* pypi publishing action on release

* Allowing rdims or cdims to be empty array. (#43)

Closes #42

* Adding tensor.ttt implementation. (#44)

Closes #28

* Bump version

* Implement ktensor.score and associated tests.

* Changes to supporting pyttb data classes and associated tests to enable ktensor.score.

* Bump version.

* Compatibility with numpy 1.24.x (#49)

Closes #48

* Replace "numpy.float" with equivalent "float"

numpy.float was deprecated in 1.20 and removed in 1.24
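A quick illustration of the breakage (plain NumPy fact, not pyttb code):

    import numpy as np
    # np.float was an alias for the builtin float; on numpy >= 1.24 the
    # attribute is gone, so the next line raises AttributeError if uncommented.
    # np.float(1.5)
    float(1.5)   # equivalent replacement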

* sptensor.ttv: support 'vector' being a plain list

(rather than just numpy.ndarray). Backwards compatible: an ndarray
argument still works. This is needed because newer numpy no longer allows
np.array(list) when the elements of list are ndarrays of different shapes.
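A minimal sketch of the NumPy change driving this (generic example, not pyttb code):

    import numpy as np
    v0, v1 = np.ones(2), np.ones(3)
    # On numpy >= 1.24 this raises ValueError (ragged/inhomogeneous shapes);
    # older versions silently built an object array:
    # np.array([v0, v1])
    vectors = [v0, v1]   # a plain list of vectors works for ttv instead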

* Make ktensor.innerprod call ttv with 'vector' as plain list

(instead of numpy.ndarray, because newer versions don't allow ragged arrays)

* tensor.ttv: avoid ragged numpy arrays

* Fix two unit test failures due to numpy related changes

* More numpy updates

- numpy.int is removed - use int instead
- don't try to construct ragged/inhomogeneous numpy arrays in tests.
  Use plain lists of vectors instead

* Fix typo in assert message

* Let ttb.tt_dimscheck catch empty input error

In the three ttv methods, ttb.tt_dimscheck checks that 'vector' argument
is not an empty list/ndarray. Revert previous changes that checked for this
before calling tt_dimscheck.

* Bump version

* TENSOR: Fix slices ref when return value isn't scalar or vector. #41 (#50)

Closes #41

* Ttensor implementation (#51)

* TENSOR: Fix slices ref when return value isn't scalar or vector. #41

* TTENSOR: Add tensor creation (partial support of core tensor types) and display

* SPTENSOR: Add numpy scalar type for multiplication filter.

* TTENSOR: Double, full, isequal, mtimes, ndims, size, uminus, uplus, and partial innerprod.

* TTENSOR: TTV (finishes innerprod), mttkrp, and norm

* TTENSOR: TTM, permute and minor cleanup.

* TTENSOR: Reconstruct

* TTENSOR: Nvecs

* SPTENSOR:
* Fix argument mismatch for ttm (modes should be dims)
* Fix ttm for rectangular matrices
* Make error message consistent with tensor
TENSOR:
* Fix error message

* TTENSOR: Improve test coverage and corresponding bug fixes discovered.

* Test coverage (#52)

* SPTENSOR:
* Fix argument mismatch for ttm (modes should be dims)
* Fix ttm for rectangular matrices
* Make error message consistent with tensor
TENSOR:
* Fix error message

* SPTENSOR: Improve test coverage, replace prints, and some doc string fixes.

* PYTTB_UTILS: Improve test coverage

* TENMAT: Remove impossible condition. Shape is a property, the property handles the (0,) shape condition. So ndims should never see it.

* TENSOR: Improve test coverage. One line left, but logic of setitem is unclear without MATLAB validation of behavior.

* CP_APR: Add tests for sptensor, and corresponding bug fixes to improve test coverage.

---------

Co-authored-by: Danny Dunlavy <dmdunla@sandia.gov>

* Bump version

* TUCKER_ALS: Add tucker_als to validate ttensor implementation. (#53)

* Bump version of actions (#55)

actions/setup-python@v4 to avoid deprecation warnings

* Tensor docs plus Linting and Typing and Black oh my (#54)

* TENSOR: Apply black and enforce it

* TENSOR: Add isort and pylint. Fix to pass then enforce

* TENSOR: Variety of related fixes:
* Add mypy type checking
* Update infrastructure for validating package
* Fix doc tests and add more examples

* DOCTEST: Add doctest automatically to regression
* Fix existing failures

* DOCTEST: Fix non-uniform array

* DOCTEST: Fix precision errors in example

* AUTOMATION: Add test directory otherwise only doctests run

* TENSOR: Fix bad rebase from numpy fix

* Auto formatting (#60)

* COVERAGE: Fix some coverage regressions from pylint PR

* ISORT: Run isort on source and tests

* BLACK: Run black on source and tests

* BLACK: Run black on source and tests

* FORMATTING: Add tests and verification for autoformatting

* FORMATTING: Add black/isort to root to simplify

* Add preliminary contributor guide instructions

Closes #59

* TUCKER_ALS: TTM with negative values is broken in ttensor (#62) (#66)

* Replace usage in tucker_als
* Update test for tucker_als to ensure result matches expectation
* Add early error handling in ttensor ttm for negative dims

* Hosvd (#67)

* HOSVD: Preliminary outline of core functionality

* HOSVD: Fix numeric bug
* Was slicing incorrectly
* Update test to check convergence

* HOSVD: Finish output and test coverage

* TENSOR: Prune numbers.Real
* Real and mypy don't play nice python/mypy#3186
* This allows partial typing support of HOSVD

* Add test that matches TTB for MATLAB output of HOSVD (#79)

This closes #78

* Bump version (#81)

Closes #80

* Lint pyttb_utils and lint/type sptensor (#77)

* PYTTB_UTILS: Fix and enforce pylint

* PYTTB_UTILS: Pull out utility only used internally in sptensor

* SPTENSOR: Fix and enforce pylint

* SPTENSOR: Initial pass at typing support

* SPTENSOR: Complete initial typing coverage

* SPTENSOR: Fix test coverage from typing changes.

* PYLINT: Update test to lint files in parallel to improve dev experience.

* HOSVD: Negative signs can be permuted for equivalent decomposition (#82)
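A small numeric check of why this is only a sign ambiguity, sketched in the matrix case (illustration only, not pyttb code): flipping the sign of a factor column together with the matching core slice leaves the reconstruction unchanged.

    import numpy as np
    G = np.random.rand(2, 2)                   # core
    U, V = np.random.rand(3, 2), np.random.rand(4, 2)
    S = np.diag([1.0, -1.0])                   # flip the second column/slice
    X1 = U @ G @ V.T
    X2 = (U @ S) @ (S @ G) @ V.T               # S @ S = I, so X2 == X1
    assert np.allclose(X1, X2)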

* Pre commit (#83)

* Setup and pyproject are redundant. Remove and resolve install issue

* Try adding pre-commit hooks

* Update Makefile for simplicity and add notes to contributor guide.

* Make pre-commit optional opt-in

* Make regression tests use simplified dependencies so we track fewer places.

* Using dynamic version in pyproject.toml to reduce places where version is set. (#86)

* Adding shell=True to subprocess.run() calls (#87)

* Adding Nick to authors (#89)

* Release prep (#90)

* Fix author for PyPI. Bump to dev version.

* Exclude dims (#91)

* Explicit Exclude_dims:
* Updated tt_dimscheck
* Update all uses of tt_dimscheck and propagate interface

* Add test coverage for exclude dims changes

* Tucker_als: Fix workaround that motivated exclude_dims

* Bump version

* Spelling

* Tensor generator helpers (#93)

* TENONES: Add initial tenones support

* TENZEROS: Add initial tenzeros support

* TENDIAG: Add initial tendiag support

* SPTENDIAG: Add initial sptendiag support

* Link in autodocumentation for recently added code: (#98)

* TTENSOR, HOSVD, TUCKER_ALS, Tensor generators

* Remove warning for nvecs: (#99)

* Make debug level log for now
* Remove test enforcement

* Rand generators (#100)

* Non-functional change:
* Fix numpy deprecation warning, logic should be equivalent

* Tenrand initial implementation

* Sptenrand initial implementation

* Complete pass on ktensor docs. (#101)

* Bump version

* Bump version

* Trying to fix coveralls

* Trying coveralls github action

* Fixing arrange and normalize. (#103)

* Fixing arrange and normalize.

* Merge main (#104)

* Trying to fix coveralls

* Trying coveralls github action

* Rename contributor guide for github magic (#106)

* Rename contributor guide for github magic

* Update reference to contributor guide from README

* Fixed the mean and stdev typo for cp_als (#117)

* Changed cp_als() param 'tensor' to 'input_tensor' to avoid ambiguity (#118)

* Changed cp_als() param 'tensor' to 'input_tensor' to avoid ambiguity

* Formatted changes with isort and black.

* Updated all `tensor`-named parameters to `input_tensor`, including in docs (#120)

* Tensor growth (#109)

* Tensor.__setitem__: Break into methods
* Non-functional change to make logic flow clearer

* Tensor.__setitem__: Fix some types to resolve edge cases

* Sptensor.__setitem__: Break into methods
* Non-functional change to make flow clearer

* Sptensor.__setitem__: Catch additional edge cases in sptensor indexing

* Tensor.__setitem__: Catch subtensor additional dim growth

* Tensor indexing (#116)

* Tensor.__setitem__/__getitem__: Fix linear index
* Previously required a numpy array; now works on a value/slice/Iterable

* Tensor.__getitem__: Fix subscripts usage
* Consistent with setitem now
* Update usages (primarily in sptensor)

* Sptensor.__setitem__/__getitem__: Fix subscripts usage
* Consistent with tensor and MATLAB now
* Update test usage

* sptensor: Add coverage for improved indexing capability

* tensor: Add coverage for improved indexing capability

---------

Co-authored-by: brian-kelley <brian.honda11@gmail.com>
Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
Co-authored-by: Dunlavy <dmdunla@s1075069.srn.sandia.gov>
Co-authored-by: DeepBlockDeepak <43120318+DeepBlockDeepak@users.noreply.github.com>

* Adding tests and data for import_data, export_data, sptensor, ktensor. Small changes to code that was unreachable.

* Updating formatting with black

* More updates for coverage.

* Black formatting updates

* Update regression-tests.yml

Adding verbose to black and isort calls

* Black updated locally to align with CI testing

* Update regression-tests.yml

---------

Co-authored-by: brian-kelley <brian.honda11@gmail.com>
Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
Co-authored-by: Dunlavy <dmdunla@s1075069.srn.sandia.gov>
Co-authored-by: DeepBlockDeepak <43120318+DeepBlockDeepak@users.noreply.github.com>
5 people committed Jun 3, 2023
1 parent d70a102 commit 7d4fb7f
Showing 9 changed files with 154 additions and 59 deletions.
79 changes: 42 additions & 37 deletions pyttb/cp_apr.py
@@ -253,7 +253,7 @@ def tt_cp_apr_mu(
kktModeViolations = np.zeros((N,))

if printitn > 0:
print("\nCP_APR:\n")
print("CP_APR:")

# Start the wall clock timer.
start = time.time()
@@ -304,7 +304,7 @@ def tt_cp_apr_mu(
# Print status
if printinneritn != 0 and divmod(i, printinneritn)[1] == 0:
print(
"\t\tMode = {}, Inner Iter = {}, KKT violation = {}\n".format(
"\t\tMode = {}, Inner Iter = {}, KKT violation = {}".format(
n, i, kktModeViolations[n]
)
)
@@ -325,11 +325,11 @@ def tt_cp_apr_mu(
# Check for convergence
if isConverged:
if printitn > 0:
print("Exiting because all subproblems reached KKT tol.\n")
print("Exiting because all subproblems reached KKT tol.")
break
if nTimes[iter] > stoptime:
if printitn > 0:
print("Exiting because time limit exceeded.\n")
print("Exiting because time limit exceeded.")
break

t_stop = time.time() - start
@@ -345,12 +345,12 @@ def tt_cp_apr_mu(
normTensor**2 + M.norm() ** 2 - 2 * input_tensor.innerprod(M)
)
fit = 1 - (normresidual / normTensor) # fraction explained by model
print("===========================================\n")
print(" Final log-likelihood = {} \n".format(obj))
print(" Final least squares fit = {} \n".format(fit))
print(" Final KKT violation = {}\n".format(kktViolations[iter]))
print(" Total inner iterations = {}\n".format(sum(nInnerIters)))
print(" Total execution time = {} secs\n".format(t_stop))
print("===========================================")
print(" Final log-likelihood = {}".format(obj))
print(" Final least squares fit = {}".format(fit))
print(" Final KKT violation = {}".format(kktViolations[iter]))
print(" Total inner iterations = {}".format(sum(nInnerIters)))
print(" Total execution time = {} secs".format(t_stop))

output = {}
output["params"] = (
@@ -472,7 +472,7 @@ def tt_cp_apr_pdnr(
times = np.zeros((maxiters, 1))

if printitn > 0:
print("\nCP_PDNR (alternating Poisson regression using damped Newton)\n")
print("CP_PDNR (alternating Poisson regression using damped Newton)")

dispLineWarn = printinneritn > 0

@@ -493,7 +493,7 @@ def tt_cp_apr_pdnr(
sparseIx.append(row_indices)

if printitn > 0:
print("done\n")
print("done")

e_vec = np.ones((1, rank))

@@ -578,13 +578,16 @@ def tt_cp_apr_pdnr(
kktModeViolations[n] = kkt_violation

if printinneritn > 0 and np.mod(i, printinneritn) == 0:
print("\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i))
print(
"\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i),
end="",
)

if i == 0:
print(", RowKKT = {}\n".format(kkt_violation))
print(", RowKKT = {}".format(kkt_violation))
else:
print(
", RowKKT = {}, RowObj = {}\n".format(
", RowKKT = {}, RowObj = {}".format(
kkt_violation, -f_new
)
)
@@ -667,7 +670,7 @@ def tt_cp_apr_pdnr(
if printitn > 0 and np.mod(iter, printitn) == 0:
fnVals[iter] = -tt_loglikelihood(input_tensor, M)
print(
"{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}\n".format(
"{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}".format(
iter,
nInnerIters[iter],
kktViolations[iter],
Expand All @@ -684,7 +687,7 @@ def tt_cp_apr_pdnr(
if isConverged and inexact and rowsubprobStopTol <= stoptol:
break
if times[iter] > stoptime:
print("EXiting because time limit exceeded\n")
print("EXiting because time limit exceeded")
break

t_stop = time.time() - start
Expand All @@ -700,12 +703,12 @@ def tt_cp_apr_pdnr(
normTensor**2 + M.norm() ** 2 - 2 * input_tensor.innerprod(M)
)
fit = 1 - (normresidual / normTensor) # fraction explained by model
print("===========================================\n")
print(" Final log-likelihood = {} \n".format(obj))
print(" Final least squares fit = {} \n".format(fit))
print(" Final KKT violation = {}\n".format(kktViolations[iter]))
print(" Total inner iterations = {}\n".format(sum(nInnerIters)))
print(" Total execution time = {} secs\n".format(t_stop))
print("===========================================")
print(" Final log-likelihood = {}".format(obj))
print(" Final least squares fit = {}".format(fit))
print(" Final KKT violation = {}".format(kktViolations[iter]))
print(" Total inner iterations = {}".format(sum(nInnerIters)))
print(" Total execution time = {} secs".format(t_stop))

output = {}
output["params"] = (
@@ -840,7 +843,7 @@ def tt_cp_apr_pqnr(
times = np.zeros((maxiters, 1))

if printitn > 0:
print("\nCP_PQNR (alternating Poisson regression using quasi-Newton)\n")
print("CP_PQNR (alternating Poisson regression using quasi-Newton)")

dispLineWarn = printinneritn > 0

@@ -861,7 +864,7 @@ def tt_cp_apr_pqnr(
sparseIx.append(row_indices)

if printitn > 0:
print("done\n")
print("done")

# Main loop: iterate until convergence or a max threshold is reached
for iter in range(maxiters):
@@ -958,20 +961,22 @@ def tt_cp_apr_pqnr(

# We now use \| KKT \|_{inf}:
kkt_violation = np.max(np.abs(np.minimum(m_row, gradM)))
# print("Intermediate Printing m_row: {}\n and gradM{}".format(m_row, gradM))

# Report largest row subproblem initial violation
if i == 0 and kkt_violation > kktModeViolations[n]:
kktModeViolations[n] = kkt_violation

if printinneritn > 0 and np.mod(i, printinneritn) == 0:
print("\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i))
print(
"\tMode = {}, Row = {}, InnerIt = {}".format(n, jj, i),
end="",
)

if i == 0:
print(", RowKKT = {}\n".format(kkt_violation))
print(", RowKKT = {}".format(kkt_violation))
else:
print(
", RowKKT = {}, RowObj = {}\n".format(
", RowKKT = {}, RowObj = {}".format(
kkt_violation, -f_new
)
)
@@ -1075,7 +1080,7 @@ def tt_cp_apr_pqnr(
if printitn > 0 and np.mod(iter, printitn) == 0:
fnVals[iter] = -tt_loglikelihood(input_tensor, M)
print(
"{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}\n".format(
"{}. Ttl Inner Its: {}, KKT viol = {}, obj = {}, nz: {}".format(
iter, nInnerIters[iter], kktViolations[iter], fnVals[iter], num_zero
)
)
@@ -1086,7 +1091,7 @@ def tt_cp_apr_pqnr(
if isConverged:
break
if times[iter] > stoptime:
print("Exiting because time limit exceeded\n")
print("Exiting because time limit exceeded")
break

t_stop = time.time() - start
Expand All @@ -1102,12 +1107,12 @@ def tt_cp_apr_pqnr(
normTensor**2 + M.norm() ** 2 - 2 * input_tensor.innerprod(M)
)
fit = 1 - (normresidual / normTensor) # fraction explained by model
print("===========================================\n")
print(" Final log-likelihood = {} \n".format(obj))
print(" Final least squares fit = {} \n".format(fit))
print(" Final KKT violation = {}\n".format(kktViolations[iter]))
print(" Total inner iterations = {}\n".format(sum(nInnerIters)))
print(" Total execution time = {} secs\n".format(t_stop))
print("===========================================")
print(" Final log-likelihood = {}".format(obj))
print(" Final least squares fit = {}".format(fit))
print(" Final KKT violation = {}".format(kktViolations[iter]))
print(" Total inner iterations = {}".format(sum(nInnerIters)))
print(" Total execution time = {} secs".format(t_stop))

output = {}
output["params"] = (
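The common thread in these cp_apr.py edits: the format strings were carried over from MATLAB's fprintf, which needs an explicit '\n', but Python's print already appends a newline, so the trailing '\n' produced blank lines. Where a status line is assembled from two calls, the first now passes end="" so the pieces stay on one line. A tiny sketch of the pattern:

    # print() appends a newline; end="" suppresses it so a follow-up call
    # can continue the same status line.
    print("\tMode = {}, Row = {}, InnerIt = {}".format(1, 5, 0), end="")
    print(", RowKKT = {}".format(1e-4))
    # -> 	Mode = 1, Row = 5, InnerIt = 0, RowKKT = 0.0001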
6 changes: 3 additions & 3 deletions pyttb/export_data.py
@@ -15,6 +15,9 @@ def export_data(data, filename, fmt_data=None, fmt_weights=None):
"""
Export tensor-related data to a file.
"""
if not isinstance(data, (ttb.tensor, ttb.sptensor, ttb.ktensor, np.ndarray)):
assert False, f"Invalid data type for export: {type(data)}"

# open file
fp = open(filename, "w")

@@ -54,9 +57,6 @@ def export_data(data, filename, fmt_data=None, fmt_weights=None):
export_size(fp, data.shape)
export_array(fp, data, fmt_data)

else:
assert False, "Invalid data type for export"


def export_size(fp, shape):
# Export the size of something to a file
8 changes: 5 additions & 3 deletions pyttb/import_data.py
@@ -24,23 +24,27 @@ def import_data(filename):
data_type = import_type(fp)

if data_type not in ["tensor", "sptensor", "matrix", "ktensor"]:
fp.close()
assert False, f"Invalid data type found: {data_type}"

if data_type == "tensor":
shape = import_shape(fp)
data = import_array(fp, np.prod(shape))
fp.close()
return ttb.tensor().from_data(data, shape)

elif data_type == "sptensor":
shape = import_shape(fp)
nz = import_nnz(fp)
subs, vals = import_sparse_array(fp, len(shape), nz)
fp.close()
return ttb.sptensor().from_data(subs, vals, shape)

elif data_type == "matrix":
shape = import_shape(fp)
mat = import_array(fp, np.prod(shape))
mat = np.reshape(mat, np.array(shape))
fp.close()
return mat

elif data_type == "ktensor":
@@ -54,11 +58,9 @@ def import_data(filename):
fac = import_array(fp, np.prod(fac_shape))
fac = np.reshape(fac, np.array(fac_shape))
factor_matrices.append(fac)
fp.close()
return ttb.ktensor().from_data(weights, factor_matrices)

# Close file
fp.close()


def import_type(fp):
# Import IO data type
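Taken together with the export_data change above, the error paths now fire early and the file handle is closed on every return. A minimal round-trip sketch, assuming pyttb exposes these helpers at the package level as ttb.export_data/ttb.import_data (filename is hypothetical; constructors mirror the calls shown in this diff):

    import numpy as np
    import pyttb as ttb

    T = ttb.tensor().from_data(np.arange(6.0), (2, 3))
    ttb.export_data(T, "example.tns")      # writes type, shape, then values
    T2 = ttb.import_data("example.tns")    # reads them back as a ttb.tensor
    assert (T.data == T2.data).all()
    # A non-tensor input now fails the isinstance check up front:
    # ttb.export_data([1, 2, 3], "bad.tns")   # AssertionError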
16 changes: 3 additions & 13 deletions pyttb/sptensor.py
@@ -671,14 +671,11 @@ def logical_and(self, B: Union[float, sptensor, ttb.tensor]) -> sptensor:
if not self.shape == B.shape:
assert False, "Must be tensors of the same shape"

def is_length_2(x):
return len(x) == 2

C = sptensor.from_aggregator(
np.vstack((self.subs, B.subs)),
np.vstack((self.vals, B.vals)),
self.shape,
is_length_2,
lambda x: len(x) == 2,
)

return C
@@ -735,15 +732,11 @@ def logical_or(
assert False, "Logical Or requires tensors of the same size"

if isinstance(B, ttb.sptensor):

def is_length_ge_1(x):
return len(x) >= 1

return sptensor.from_aggregator(
np.vstack((self.subs, B.subs)),
np.ones((self.subs.shape[0] + B.subs.shape[0], 1)),
self.shape,
is_length_ge_1,
lambda x: len(x) >= 1,
)

assert False, "Sptensor Logical Or argument must be scalar or sptensor"
@@ -780,12 +773,9 @@ def logical_xor(
if self.shape != other.shape:
assert False, "Logical XOR requires tensors of the same size"

def length1(x):
return len(x) == 1

subs = np.vstack((self.subs, other.subs))
return ttb.sptensor.from_aggregator(
subs, np.ones((len(subs), 1)), self.shape, length1
subs, np.ones((len(subs), 1)), self.shape, lambda x: len(x) == 1
)

assert False, "The argument must be an sptensor, tensor or scalar"
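These sptensor.py edits fold three one-line named helpers into lambdas; the underlying idea is that from_aggregator stacks the subscripts of both operands, groups duplicates, and the group size tells whether an entry is nonzero in one or both tensors. A rough standalone illustration of the logical_and case (plain NumPy, not the pyttb API):

    import numpy as np
    subs_a = np.array([[0, 0], [1, 2]])        # nonzeros of A
    subs_b = np.array([[1, 2], [2, 1]])        # nonzeros of B
    stacked = np.vstack((subs_a, subs_b))
    uniq, counts = np.unique(stacked, axis=0, return_counts=True)
    and_subs = uniq[counts == 2]               # present in both -> [[1, 2]]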
11 changes: 11 additions & 0 deletions tests/data/invalid_dims.tns
@@ -0,0 +1,11 @@
matrix
2
4 2 1
1.0000000000000000e+00
5.0000000000000000e+00
2.0000000000000000e+00
6.0000000000000000e+00
3.0000000000000000e+00
7.0000000000000000e+00
4.0000000000000000e+00
8.0000000000000000e+00
11 changes: 11 additions & 0 deletions tests/data/invalid_type.tns
@@ -0,0 +1,11 @@
list
2
4 2
1.0000000000000000e+00
5.0000000000000000e+00
2.0000000000000000e+00
6.0000000000000000e+00
3.0000000000000000e+00
7.0000000000000000e+00
4.0000000000000000e+00
8.0000000000000000e+00
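These two fixtures appear designed to drive the new early-error paths in import_data: invalid_dims.tns declares 2 dimensions but gives a 3-entry shape line, and invalid_type.tns uses the unsupported type keyword "list". A sketch of how tests would presumably exercise them (hypothetical paths and test body, assuming both failures surface as AssertionError):

    import pytest
    import pyttb as ttb

    def test_import_data_invalid():
        with pytest.raises(AssertionError):
            ttb.import_data("tests/data/invalid_type.tns")   # "list" is rejected up front
        with pytest.raises(AssertionError):
            ttb.import_data("tests/data/invalid_dims.tns")   # shape/dims mismatch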
8 changes: 5 additions & 3 deletions tests/test_cp_apr.py
@@ -148,7 +148,7 @@ def test_cpapr_mu(capsys):
ktensorInstance = ttb.ktensor.from_data(weights, factor_matrices)
tensorInstance = ktensorInstance.full()
np.random.seed(123)
M, _, _ = ttb.cp_apr(tensorInstance, 2)
M, _, _ = ttb.cp_apr(tensorInstance, 2, printinneritn=1)
# Consume the cp_apr diagnostic printing
capsys.readouterr()
assert np.isclose(M.full().data, ktensorInstance.full().data).all()
Expand All @@ -175,7 +175,9 @@ def test_cpapr_pdnr(capsys):
ktensorInstance = ttb.ktensor.from_data(weights, factor_matrices)
tensorInstance = ktensorInstance.full()
np.random.seed(123)
M, _, _ = ttb.cp_apr(tensorInstance, 2, algorithm="pdnr")
M, _, _ = ttb.cp_apr(
tensorInstance, 2, algorithm="pdnr", printinneritn=1, inexact=False
)
capsys.readouterr()
assert np.isclose(M.full().data, ktensorInstance.full().data, rtol=1e-04).all()

@@ -221,7 +223,7 @@ def test_cpapr_pqnr(capsys):
ktensorInstance = ttb.ktensor.from_data(weights, factor_matrices)
tensorInstance = ktensorInstance.full()
np.random.seed(123)
M, _, _ = ttb.cp_apr(tensorInstance, 2, algorithm="pqnr")
M, _, _ = ttb.cp_apr(tensorInstance, 2, algorithm="pqnr", printinneritn=1)
capsys.readouterr()
assert np.isclose(M.full().data, ktensorInstance.full().data, rtol=1e-01).all()

