
Upgrade of Minio appears to have lost all data #5237

Closed
gavinmcnair opened this issue Nov 27, 2017 · 27 comments · Fixed by #5364

@gavinmcnair

gavinmcnair commented Nov 27, 2017

Expected Behavior

Expected data to be available after upgrade

Current Behavior

All servers show the following errors:

time="2017-11-27T00:50:52Z" level=error msg="Unable to fetch disk info for &cmd.retryStorage{remoteStorage:(*cmd.networkStorage)(0xc4202ea478), maxRetryAttempts:1, retryUnit:1000000000, retryCap:30000000000, offline:false, offlineTimestamp:time.Time{wall:0x19a62118, ext:63647340634, loc:(*time.Location)(nil)}}" cause="disk not found" source="[xl-v1.go:199:getDisksInfo()]" 
time="2017-11-27T00:51:24Z" level=fatal msg="Unable to initialize XL object layer." cause="all stale disks had write errors during healing: " source="[xl-v1.go:74:newXLObjectLayer()]" stack="erasure-healfile.go:147:ErasureStorage.HealFile xl-v1-healing.go:469:healObject xl-v1-healing.go:148:healBucketMetadata.func1 xl-v1-healing.go:156:healBucketMetadata xl-v1-healing.go:322:quickHeal xl-v1.go:162:newXLObjects xl-v1.go:73:newXLObjectLayer server-main.go:260:newObjectLayer server-main.go:207:serverMain /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/app.go:499:github.com/minio/minio/vendor/github.com/minio/cli.HandleAction /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/command.go:214:github.com/minio/minio/vendor/github.com/minio/cli.Command.Run /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/app.go:260:github.com/minio/minio/vendor/github.com/minio/cli.(*App).Run main.go:145:Main /q/.q/sources/gopath/src/github.com/minio/minio/main.go:68:main.main /opt/go/src/runtime/proc.go:185:runtime.main /opt/go/src/runtime/asm_amd64.s:2337:runtime.goexit" 
time="2017-11-27T00:51:44Z" level=error msg="Unable to fetch disk info for &cmd.retryStorage{remoteStorage:(*cmd.networkStorage)(0xc42016ad50), maxRetryAttempts:1, retryUnit:1000000000, retryCap:30000000000, offline:false, offlineTimestamp:time.Time{wall:0xff11609, ext:63647340686, loc:(*time.Location)(nil)}}" cause="disk not found" source="[xl-v1.go:199:getDisksInfo()]" 
time="2017-11-27T00:51:45Z" level=error msg="Unable to fetch disk info for &cmd.retryStorage{remoteStorage:(*cmd.networkStorage)(0xc42016ad58), maxRetryAttempts:1, retryUnit:1000000000, retryCap:30000000000, offline:false, offlineTimestamp:time.Time{wall:0xff11609, ext:63647340686, loc:(*time.Location)(nil)}}" cause="disk not found" source="[xl-v1.go:199:getDisksInfo()]" 
time="2017-11-27T00:52:53Z" level=error msg="Unable to parse JWT token string" cause="Token used before issued" source="[jwt.go:103:isAuthTokenValid()]" 
time="2017-11-27T00:53:12Z" level=fatal msg="Unable to initialize XL object layer." cause="all stale disks had write errors during healing: " source="[xl-v1.go:74:newXLObjectLayer()]" stack="erasure-healfile.go:147:ErasureStorage.HealFile xl-v1-healing.go:469:healObject xl-v1-healing.go:148:healBucketMetadata.func1 xl-v1-healing.go:156:healBucketMetadata xl-v1-healing.go:322:quickHeal xl-v1.go:162:newXLObjects xl-v1.go:73:newXLObjectLayer server-main.go:260:newObjectLayer server-main.go:207:serverMain /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/app.go:499:github.com/minio/minio/vendor/github.com/minio/cli.HandleAction /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/command.go:214:github.com/minio/minio/vendor/github.com/minio/cli.Command.Run /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/app.go:260:github.com/minio/minio/vendor/github.com/minio/cli.(*App).Run main.go:145:Main /q/.q/sources/gopath/src/github.com/minio/minio/main.go:68:main.main /opt/go/src/runtime/proc.go:185:runtime.main /opt/go/src/runtime/asm_amd64.s:2337:runtime.goexit" 
time="2017-11-27T00:53:23Z" level=error msg="Unable to fetch disk info for &cmd.retryStorage{remoteStorage:(*cmd.networkStorage)(0xc420080790), maxRetryAttempts:1, retryUnit:1000000000, retryCap:30000000000, offline:false, offlineTimestamp:time.Time{wall:0x1fca5b2f, ext:63647340794, loc:(*time.Location)(nil)}}" cause="disk not found" source="[xl-v1.go:199:getDisksInfo()]" 
time="2017-11-27T00:53:31Z" level=error msg="Unable to fetch disk info for &cmd.retryStorage{remoteStorage:(*cmd.networkStorage)(0xc4200807c0), maxRetryAttempts:1, retryUnit:1000000000, retryCap:30000000000, offline:false, offlineTimestamp:time.Time{wall:0x1fca85ec, ext:63647340794, loc:(*time.Location)(nil)}}" cause="disk not found" source="[xl-v1.go:199:getDisksInfo()]" 
time="2017-11-27T00:54:52Z" level=fatal msg="Unable to initialize XL object layer." cause="all stale disks had write errors during healing: " source="[xl-v1.go:74:newXLObjectLayer()]" stack="erasure-healfile.go:147:ErasureStorage.HealFile xl-v1-healing.go:469:healObject xl-v1-healing.go:148:healBucketMetadata.func1 xl-v1-healing.go:156:healBucketMetadata xl-v1-healing.go:322:quickHeal xl-v1.go:162:newXLObjects xl-v1.go:73:newXLObjectLayer server-main.go:260:newObjectLayer server-main.go:207:serverMain /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/app.go:499:github.com/minio/minio/vendor/github.com/minio/cli.HandleAction /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/command.go:214:github.com/minio/minio/vendor/github.com/minio/cli.Command.Run /q/.q/sources/gopath/src/github.com/minio/minio/vendor/github.com/minio/cli/app.go:260:github.com/minio/minio/vendor/github.com/minio/cli.(*App).Run main.go:145:Main /q/.q/sources/gopath/src/github.com/minio/minio/main.go:68:main.main /opt/go/src/runtime/proc.go:185:runtime.main /opt/go/src/runtime/asm_amd64.s:2337:runtime.goexit" 

All my data is unavailable. I can still see the fragments of the files in the data directories but am unable to access anything at all. This is a big problem.

Possible Solution

Steps to Reproduce (for bugs)

  1. Upgraded binary and restarted instances.
  2. Configuration got auto-updated.
  3. Got the aforementioned errors, and the web UI gives the error

Server not initialized, please try again

  4. Stuck.

Context

An upgrade. On startup I was told my version was 5 months old, so I decided to upgrade and lost data.

Your Environment

  • Version used (minio version): (previous version)
    Version: 2017-06-13T19:01:01Z
    Release-Tag: RELEASE.2017-06-13T19-01-01Z
    Commit-ID: 353f2d3
  • New Version
    Version: 2017-11-22T19:55:46Z
    Release-Tag: RELEASE.2017-11-22T19-55-46Z
    Commit-ID: d1a6c32
  • Operating System and version (uname -a):
    Linux data2 3.11.0-17-generic #31~precise1-Ubuntu SMP Tue Feb 4 21:25:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
@harshavardhana
Member

The data is not lost here; rather, you don't have enough quorum to read the data. I think the issue is perhaps that the servers are not connecting properly at startup.

@harshavardhana
Member

How did you upgrade? Did you attempt a rolling upgrade? A rolling upgrade is not possible, since Minio requires all servers in a distributed setup to be on the same version.

From the logs it seems like there were JWT-related errors, and it also looks like there is a problem with NTP - the nodes are perhaps skewed in time. To make sure that things are fine:

  • Make sure proper NTP is configured across all servers.
  • Once configured, restart all the servers and double-check that all nodes are on the same version.
  • Run mc admin heal on the entire cluster to fix any discrepancies in the objects. This will heal partial objects, if any.

@nitisht nitisht added this to the Next Release milestone Nov 27, 2017
@gavinmcnair
Author

gavinmcnair commented Nov 27, 2017

The upgrade was done cold, with all the Minio servers down. The servers were all NTP-synchronised, though 3 of the hosts (which sit on the same physical host) rebooted and may have been slightly out of sync for a short period, which was the start of the issue.

Now 15 out of 16 of the hosts are up and are within 30ms of each other.

Here is the current ntp variance (delay, offset and jitter in milliseconds):

delay   offset  jitter
===============
 1.131    7.265   9.998
 1.073   -5.811  16.629
 1.042   -6.877   5.976
 1.093   30.058  13.438
 0.882   23.615   2.240
 1.008   27.249   3.146
 1.074   20.032   4.028
 1.144   -1.136  10.659
 1.030   -3.336  10.224
 1.182   37.614  18.567
 1.058   13.992   9.241
 1.125   27.563  26.161
 0.945    8.732   2.261
 1.057   29.912  14.397
 1.072    1.730   6.237

All servers are confirmed to be running

Version: 2017-11-22T19:55:46Z
Release-Tag: RELEASE.2017-11-22T19-55-46Z
Commit-ID: d1a6c32d800f1d5b703baad1f8aeede6cf2cdf48

Heal cannot be used, since the server is not initialized:

> mc admin heal minio/www
mc: <ERROR> Cannot heal bucket. Server not initialized, please try again.

I have tried another restart with the servers definitely in time sync.

I still see the JWT token string error, and also:

time="2017-11-27T13:10:11Z" level=fatal msg="Unable to initialize XL object layer." cause="Unable to initialize '.minio.sys' meta volume, Invalid token" source="[xl-v1.go:74:newXLObjectLayer()]"

I'm still stuck. Any more ideas?

@harshavardhana
Member

Would it be possible to access these nodes remotely?

@gavinmcnair
Author

gavinmcnair commented Nov 27, 2017

Definitely. I could do a screen share.

@gavinmcnair
Author

gavinmcnair commented Nov 27, 2017

I'll hang around in an appear.in room and should be available for the next 2-3 hours (today)

https://appear.in/gavinmcnair

@harshavardhana
Member

After a lot of testing on a 16-node setup on packet.net, I am not able to reproduce the problem with either older or newer releases.

for i in $(cat packet-servers.txt); do ssh root@$i "systemctl status minio.service";done | tail -1
Nov 28 00:52:49 server-nvme9 minio[3791]: Status:         16 Online, 0 Offline. We can withstand [8] drive failure(s).

The only difference here is that I am running on Ubuntu 16.04; perhaps you are on a different operating system.

harshavardhana added a commit to harshavardhana/minio that referenced this issue Nov 28, 2017
@harshavardhana
Member

I even tried with 10 different buckets, each of them with a policy.json in place; I am not able to see the JWT issue that you are having, nor is the healing failing. There is certainly something odd about the setup we are encountering on your end.

  • Would it be possible to choose a fresh disk to compare against, e.g. /data/minio/data vs /data/minio/testing?
  • Would you mind running iperf between all servers, to get the baseline bandwidth the setup has?

@harshavardhana
Member

After looking further into the token issue, which is perhaps what is happening in your setup, there is only one reason it could have occurred.

The actual verification is done by our jwt library

        if c.VerifyIssuedAt(now, false) == false {
                vErr.Inner = fmt.Errorf("Token used before issued")
                vErr.Errors |= ValidationErrorIssuedAt
        }

This basically calls the following function:

func verifyIat(iat int64, now int64, required bool) bool {
        if iat == 0 {
                return !required
        }
        return now >= iat
}

IssuedAt is set in our code when generating the JWT token, at the time a client tries to connect to another remote node - in this case our RPC client.

        utcNow := UTCNow()
        token := jwtgo.NewWithClaims(jwtgo.SigningMethodHS512, jwtgo.StandardClaims{
                ExpiresAt: utcNow.Add(expiry).Unix(),
                IssuedAt:  utcNow.Unix(), ---> Unix epoch time
                Subject:   accessKey,
        })

Similarly, on the JWT library end, it validates using the Valid() function:

// Validates time based claims "exp, iat, nbf".
// There is no accounting for clock skew.
// As well, if any of the above claims are not in the token, it will still
// be considered a valid claim.
func (c StandardClaims) Valid() error {
        vErr := new(ValidationError)
        now := TimeFunc().Unix() --> Unix epoch time.

Now, by this logic, these two epochs can only differ in a way that fails validation if the remote node's clock is running behind the server which generated the token. That is why the message says the token was used before it was issued.
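
For what it's worth, here is a minimal standalone sketch (my own illustration, not Minio code) of the failure mode using the same dgrijalva/jwt-go library; overriding jwtgo.TimeFunc stands in for a remote node whose clock runs behind the issuer's:

```go
package main

import (
	"fmt"
	"time"

	jwtgo "github.com/dgrijalva/jwt-go"
)

func main() {
	secret := []byte("minio-secret") // illustrative key, not a real credential

	// "Node A" issues a token stamped with its own clock.
	token := jwtgo.NewWithClaims(jwtgo.SigningMethodHS512, jwtgo.StandardClaims{
		ExpiresAt: time.Now().UTC().Add(time.Hour).Unix(),
		IssuedAt:  time.Now().UTC().Unix(),
	})
	signed, err := token.SignedString(secret)
	if err != nil {
		panic(err)
	}

	// Emulate "node B", whose clock runs 2 seconds behind node A.
	jwtgo.TimeFunc = func() time.Time { return time.Now().Add(-2 * time.Second) }

	var claims jwtgo.StandardClaims
	_, err = jwtgo.ParseWithClaims(signed, &claims, func(*jwtgo.Token) (interface{}, error) {
		return secret, nil
	})
	fmt.Println(err) // prints the "Token used before issued" validation error
}
```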

I still strongly think that there is some date problem on these machines. Previous releases like 2017-06-13 had a different logging model which perhaps somehow masked this, but the newer release is exposing a bug in the setup.

Attaching a compiled Go program to run on your systems to look at the unix timestamp: unix.zip - can you run this on your end so we can confirm the unix timestamp is reporting the right values?

Thanks - let me know if you are up for another remote session; we can schedule it again at your convenience.

harshavardhana added a commit to harshavardhana/minio that referenced this issue Nov 28, 2017
@gavinmcnair
Author

All machines reported an identical epoch time.

I'll run a proper iperf later, though I notice the bandwidth is smaller than I'd expect between a single pair of test hosts (the host has a 2x10Gb bonded interface). There may be a subtle network misconfiguration, though.

[  5]  0.0-10.0 sec  2.17 GBytes  1.86 Gbits/sec

@harshavardhana
Member

> All machines reported an identical epoch time.
>
> I'll run a proper iperf later, though I notice the bandwidth is smaller than I'd expect between a single pair of test hosts (the host has a 2x10Gb bonded interface). There may be a subtle network misconfiguration, though.

We should perhaps sit on this again (PST time), if you have time, and see the issue through. There is something which doesn't add up in the entire configuration.

> [  5]  0.0-10.0 sec  2.17 GBytes  1.86 Gbits/sec

This number is expected; if you ran two iperfs in parallel, they would show a higher combined number.

@gavinmcnair
Author

This was also run from the virtual machine and not from the underlying host. I'll take a look later at home, since I have a lot to do today.

@gavinmcnair
Author

gavinmcnair commented Nov 28, 2017

Happy to sit on this again later.

@gavinmcnair
Author

gavinmcnair commented Nov 29, 2017

I copied all the data onto a single machine and used the newest Minio pointed directly at the directories. There was not a single error and the data came online straight away. (though it seems to think I have 2.8TB free out of 5.8TB, which is definitely not right)

Does this point to the network issue you mentioned?

@harshavardhana
Member

> I copied all the data onto a single machine and used the newest Minio pointed directly at the directories. There was not a single error and the data came online straight away.

Yes, I would expect that. You could even try a different data partition, like a temporary path such as /data/minio/tmp.

> (though it seems to think I have 2.8TB free out of 5.8TB, which is definitely not right)

This would be something related to the df -h output, @gavinmcnair.

> Does this point to the network issue you mentioned?

Yes, it must be a network problem; there is some sort of packet delay which causes this. But I am not really sure at this point what might be causing it. In my testing I am not able to reproduce it anyway.

We can take a look again if you have time. Also, please reach us on https://slack.minio.io where we can perhaps have a quicker discussion on this - thanks

harshavardhana added a commit to harshavardhana/minio that referenced this issue Nov 30, 2017
harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 1, 2017
@harshavardhana
Member

> We can take a look again if you have time. Also, please reach us on https://slack.minio.io where we can perhaps have a quicker discussion on this - thanks

To continue this discussion further, would it be possible for you to join our Slack channel? Let me know when you wish to continue.

harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 2, 2017
harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 4, 2017
harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 7, 2017
harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 7, 2017
harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 8, 2017
harshavardhana added a commit to harshavardhana/minio that referenced this issue Dec 11, 2017
@vadmeste
Member

vadmeste commented Jan 1, 2018

I also see this error sometimes when I test 4 VirtualBox VMs using Vagrant on my local machine.

This PR dgrijalva/jwt-go#139 wants to add an option to configure a tolerated skew time between server and client.

This RFC, http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#expDef, says:

> Implementers MAY provide for some small leeway, usually no more than a few minutes, to account for clock skew

(as seen in the PR discussion)

I wasn't able to test, but I guess the following diff can fix the problem:

diff --git a/cmd/jwt.go b/cmd/jwt.go
index e8da488d..413fef2a 100644
--- a/cmd/jwt.go
+++ b/cmd/jwt.go
@@ -47,6 +47,41 @@ var (
        errNoAuthToken          = errors.New("JWT token missing")
 )
 
+type relaxedJWTClaims struct {
+       jwtgo.StandardClaims
+       // Tolerated clock skew time for IssuedAt time, in seconds
+       leeway int64
+}
+
+// VerifyIssuedAt allows the token to have been issued up to
+// `leeway` seconds after this node's notion of "now".
+func (c *relaxedJWTClaims) VerifyIssuedAt(cmp int64, req bool) bool {
+       return c.StandardClaims.VerifyIssuedAt(cmp+c.leeway, req)
+}
+
+// Valid is overridden as well; the promoted StandardClaims.Valid
+// would otherwise still run the strict IssuedAt check.
+func (c *relaxedJWTClaims) Valid() error {
+       vErr := new(jwtgo.ValidationError)
+       now := jwtgo.TimeFunc().Unix()
+       if !c.VerifyExpiresAt(now, false) {
+               vErr.Inner = errors.New("Token is expired")
+               vErr.Errors |= jwtgo.ValidationErrorExpired
+       }
+       if !c.VerifyIssuedAt(now, false) {
+               vErr.Inner = errors.New("Token used before issued")
+               vErr.Errors |= jwtgo.ValidationErrorIssuedAt
+       }
+       if !c.VerifyNotBefore(now, false) {
+               vErr.Inner = errors.New("Token is not valid yet")
+               vErr.Errors |= jwtgo.ValidationErrorNotValidYet
+       }
+       if vErr.Errors == 0 {
+               return nil
+       }
+       return vErr
+}
+
 func authenticateJWT(accessKey, secretKey string, expiry time.Duration) (string, error) {
        passedCredential, err := auth.CreateCredentials(accessKey, secretKey)
        if err != nil {
@@ -97,7 +132,7 @@ func isAuthTokenValid(tokenString string) bool {
        if tokenString == "" {
                return false
        }
-       var claims jwtgo.StandardClaims
+       var claims = relaxedJWTClaims{leeway: 60}
        jwtToken, err := jwtgo.ParseWithClaims(tokenString, &claims, keyFuncCallback)
        if err != nil {
                errorIf(err, "Unable to parse JWT token string")

@harshavardhana
Member

That looks like an interesting fix @vadmeste - can you test and confirm whether it works?

@harshavardhana
Member

> That looks like an interesting fix @vadmeste - can you test and confirm whether it works?

@vadmeste I looked at this further - IssuedAt never does an exact match; the check passes if the current time is greater than or equal to the issued time. So the leeway fix is not needed IMO, and might not be the reason why this issue occurred.

func verifyIat(iat int64, now int64, required bool) bool {
	if iat == 0 {
		return !required
	}
	return now >= iat
}
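
A quick standalone check of that behaviour (verifyIat copied from the library above; the rest is my own illustration):

```go
package main

import "fmt"

// verifyIat as in the jwt-go library quoted above.
func verifyIat(iat int64, now int64, required bool) bool {
	if iat == 0 {
		return !required
	}
	return now >= iat
}

func main() {
	fmt.Println(verifyIat(100, 99, false))  // false: clock 1s behind -> "used before issued"
	fmt.Println(verifyIat(100, 100, false)) // true: the exact same second passes
	fmt.Println(verifyIat(100, 101, false)) // true: any later time passes
}
```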

@harshavardhana
Member

> This RFC, http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#expDef

This is for VerifyExpiresAt, not VerifyIssuedAt; issued-at should be verified as above @vadmeste

@vadmeste
Member

vadmeste commented Jan 2, 2018

Yeah, I am still doing a little investigation... I was able to reproduce it with my Vagrant setup.

@vadmeste
Member

vadmeste commented Jan 2, 2018

Running two VMs on my machine (Vagrant + VirtualBox) shows the problem.

I use the following code, which prints the time with high precision plus the epoch time:

package main

import (
	"fmt"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		t := time.Now()
		fmt.Print(t, t.Unix())
		fmt.Println("")
		time.Sleep(50 * time.Millisecond)
	}
}

I have two terminals logged in to VM1 and VM2. If I run the above code in VM1 and then, manually and quickly, in VM2, I can see that VM2 sometimes shows an earlier time.

VM1:
2018-01-02 20:57:22.736160179 +0000 UTC m=+0.002691410 1514926642

VM2:
2018-01-02 20:57:22.685983687 +0000 UTC m=+0.003038131 1514926642

It seems that time in VMs can sometimes skew.

@gavinmcnair
Author

Is there some hint in the code differences between the 2 releases above? I do not get the problem with the older version of the code, but I do with the latest version.

Incidentally, after setting up the latest version of Minio and mirroring all my data into it, I no longer see any issues on the new cluster. So it might be that some time-related artefact gets saved to the filesystem and causes problems even for machines that are in sync after the problem happened (all our machines were very much in sync while experiencing the problem).

@harshavardhana
Member

harshavardhana commented Jan 5, 2018

In the older version the server was generating the token, so the window for this to occur was smaller. Now that we don't transmit access keys over the network, the client generates a token and gets it validated by the server, which makes this issue more visible. Client and server here are Minio servers themselves, talking to each other. The timing issue of virtual machines is outside the scope of this problem anyway.

JWT treats this as unexpected since, according to the spec authors, a token issued and then used later within its expiry should be handled by an entity whose time has progressed monotonically. This expectation becomes problematic once precision is lost to rounding, because only whole seconds are kept for the epoch: a 50ms difference between clocks can make the top-level seconds differ by a whole second. I will read up more and see if we can use nanoseconds instead for better precision here. The JWT spec is not explicit about this.
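
To make the rounding point concrete, a small standalone sketch (timestamps chosen arbitrarily, reusing the epoch second from the VM output above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The issuer stamps IssuedAt just after a second boundary...
	issuedAt := time.Date(2018, 1, 2, 20, 57, 23, 0, time.UTC).Add(10 * time.Millisecond)

	// ...and the validating node's clock is only 50ms behind.
	now := issuedAt.Add(-50 * time.Millisecond)

	// Truncating both to whole epoch seconds turns 50ms of real skew
	// into a full second of apparent skew, so the iat check fails.
	fmt.Println(issuedAt.Unix(), now.Unix())   // 1514926643 1514926642
	fmt.Println(now.Unix() >= issuedAt.Unix()) // false -> "Token used before issued"
}
```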

@harshavardhana
Member

The jpadilla/pyjwt#190 issue explains this frustration in more detail; it looks like the JWT RFC has gone back and forth on this.

https://tools.ietf.org/id/draft-jones-json-web-token-04.html (Draft, expired in 2011):

> The iat (issued at) claim identifies the UTC time at which the JWT was issued. The processing of the iat claim requires that the current date/time MUST be after the issued date/time listed in the iat claim. Implementers MAY provide for some small leeway, usually no more than a few minutes, to account for clock skew. This claim is OPTIONAL.

https://tools.ietf.org/html/rfc7519#page-10 (Proposed) does not mention any leeway regarding iat - it is missing from the official proposal.

Looking at the PyJWT implementation, a certain leeway is provided; perhaps we should bring that in as well. So I guess @vadmeste's proposal is okay in this scenario - implementing our own claims wrapping StandardClaims would work fine.

@harshavardhana
Member

Let me add a new implementation, leewayClaims, which adds leeway for expiry and iat together.
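
For reference, a sketch of what such a wrapper might look like; only the name leewayClaims comes from the comment above, the rest is my assumption (and, as with the diff earlier in the thread, Valid() would also need to be overridden for these checks to take effect):

```go
package cmd

import jwtgo "github.com/dgrijalva/jwt-go"

// Hypothetical leewayClaims: widens both time checks by `leeway` seconds.
type leewayClaims struct {
	jwtgo.StandardClaims
	leeway int64 // tolerated clock skew, in seconds
}

// A validator clock running ahead should not expire tokens early,
// so the comparison time is shifted back for the expiry check...
func (c *leewayClaims) VerifyExpiresAt(cmp int64, req bool) bool {
	return c.StandardClaims.VerifyExpiresAt(cmp-c.leeway, req)
}

// ...and a validator clock running behind should not reject fresh
// tokens, so it is shifted forward for the issued-at check.
func (c *leewayClaims) VerifyIssuedAt(cmp int64, req bool) bool {
	return c.StandardClaims.VerifyIssuedAt(cmp+c.leeway, req)
}
```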

harshavardhana added a commit to harshavardhana/minio that referenced this issue Jan 5, 2018
Remove the requirement for IssuedAt claims from JWT
for now, since we do not currently have a way to provide
a leeway window for validating the claims. Expiry does
the same checks as IssuedAt with an expiry window.

We do not need it right now since we have a clock skew check
in our RPC layer to handle this correctly.

rpc-common.go
```
func isRequestTimeAllowed(requestTime time.Time) bool {
        // Check whether request time is within acceptable skew time.
        utcNow := UTCNow()
        return !(requestTime.Sub(utcNow) > rpcSkewTimeAllowed ||
                utcNow.Sub(requestTime) > rpcSkewTimeAllowed)
}
```

Once the PR upstream is merged dgrijalva/jwt-go#139
We can bring in support for leeway later.

Fixes minio#5237
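
As an aside, the skew check quoted in that commit message is easy to exercise standalone; in the sketch below, the 15-minute rpcSkewTimeAllowed value is an assumption, not taken from the commit:

```go
package main

import (
	"fmt"
	"time"
)

// Assumed window; the commit message above does not show the constant's value.
const rpcSkewTimeAllowed = 15 * time.Minute

func isRequestTimeAllowed(requestTime time.Time) bool {
	// Check whether request time is within acceptable skew time.
	utcNow := time.Now().UTC()
	return !(requestTime.Sub(utcNow) > rpcSkewTimeAllowed ||
		utcNow.Sub(requestTime) > rpcSkewTimeAllowed)
}

func main() {
	fmt.Println(isRequestTimeAllowed(time.Now().UTC()))                      // true
	fmt.Println(isRequestTimeAllowed(time.Now().UTC().Add(-20*time.Minute))) // false: too far behind
	fmt.Println(isRequestTimeAllowed(time.Now().UTC().Add(20*time.Minute)))  // false: too far ahead
}
```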
harshavardhana added a commit to harshavardhana/minio that referenced this issue Jan 5, 2018
harshavardhana added a commit to harshavardhana/minio that referenced this issue Jan 5, 2018
harshavardhana added a commit to harshavardhana/minio that referenced this issue Jan 5, 2018
harshavardhana added a commit to harshavardhana/minio that referenced this issue Jan 9, 2018
harshavardhana added a commit to harshavardhana/minio that referenced this issue Jan 10, 2018
kannappanr pushed a commit that referenced this issue Jan 10, 2018