Hi,
Thanks for sharing the good work.
I was reproducing the results on ImageNet-1k using the hyperparameter settings provided in the repo, but I am not able to reproduce the reported Acc1 of 67.9 for SeaFormer_T. Instead I get an Acc1 of 66.23.
Here are the parameter values from the log file:
aa: rand-m9-mstd0.5
amp: true
apex_amp: false
aug_repeats: 0
aug_splits: 0
batch_size: 128
bce_loss: false
bce_target_thresh: null
bn_eps: null
bn_momentum: null
channels_last: false
checkpoint_hist: 10
class_map: ''
clip_grad: null
clip_mode: norm
color_jitter: 0.4
cooldown_epochs: 10
crop_pct: null
cutmix: 0.0
cutmix_minmax: null
data_dir: ./imagenet_1k
dataset: ''
dataset_download: false
decay_epochs: 2.4
decay_rate: 0.973
dist_bn: reduce
drop: 0.2
drop_block: null
drop_connect: 0.2
drop_path: null
epoch_repeats: 0.0
epochs: 600
eval_metric: top1
experiment: SeaFormer_T
fuser: ''
gp: null
grad_checkpointing: false
hflip: 0.5
img_size: 224
initial_checkpoint: ''
input_size:
interpolation: ''
jsd_loss: false
layer_decay: null
local_rank: 0
log_interval: 50
log_wandb: false
lr: 0.064
lr_cycle_decay: 0.5
lr_cycle_limit: 1
lr_cycle_mul: 1.0
lr_k_decay: 1.0
lr_noise:
lr_noise_pct: 0.67
lr_noise_std: 1.0
mean: null
min_lr: 1.0e-06
mixup: 0.0
mixup_mode: batch
mixup_off_epoch: 0
mixup_prob: 1.0
mixup_switch_prob: 0.5
model: SeaFormer_T
model_ema: true
model_ema_decay: 0.9999
model_ema_force_cpu: false
momentum: 0.9
native_amp: false
no_aug: false
no_ddp_bb: false
no_prefetcher: false
no_resume_opt: false
num_classes: 1000
opt: adamw
opt_betas: null
opt_eps: 0.001
output: ./output_dir
patience_epochs: 10
pin_mem: false
pretrained: false
ratio:
recount: 1
recovery_interval: 0
remode: pixel
reprob: 0.2
resplit: false
resume: ''
save_images: false
scale:
sched: cosine
seed: 42
smoothing: 0.1
split_bn: false
start_epoch: null
std: null
sync_bn: false
torchscript: false
train_interpolation: random
train_split: train
tta: 0
use_multi_epochs_loader: false
val_split: validation
validation_batch_size: null
vflip: 0.0
warmup_epochs: 10
warmup_lr: 1.0e-06
weight_decay: 2.0e-05
worker_seeding: all
workers: 7
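To rule out a silently overridden argument, I also compared the logged args.yaml against the values I intended to pass. This is just my own quick check, not something from the repo; the run-directory path is a placeholder (timm writes args.yaml into a timestamped folder under the output directory), and the expected values are simply copied from the log above:

import yaml

# Values I intended to pass, taken from the log dump above.
expected = {
    "model": "SeaFormer_T",
    "opt": "adamw",
    "sched": "cosine",
    "lr": 0.064,
    "warmup_epochs": 10,
    "epochs": 600,
    "batch_size": 128,        # per GPU; with 8 GPUs the global batch is 1024
    "weight_decay": 2.0e-05,
    "model_ema_decay": 0.9999,
}

# Placeholder path: replace with the actual run directory under ./output_dir.
with open("./output_dir/SeaFormer_T/args.yaml") as f:
    logged = yaml.safe_load(f)

for key, want in expected.items():
    got = logged.get(key)
    flag = "ok" if got == want else "MISMATCH"
    print(f"{key}: logged={got!r}, expected={want!r} [{flag}]")

Everything matched on my side, which is why I suspect either a wrong hyperparameter elsewhere or something about my multi-GPU setup.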
Could you please tell me if these parameter values are correct and what I am doing wrong here? As recommended in the repo, I used 8 GPUs, but I am still not able to reproduce the results. The same is the case with SeaFormer_S.