Questions tagged [meta-learning]

21 questions
4
votes
1 answer

How to search for an optimal dithering pattern?

I'm trying to find an optimal dithering pattern which can be used as a threshold on a greyscale image to generate a 1-bit black and white image. Ideally it would be optimal in the sense that a human would judge it perceptually closest to the…
Alan Wolfe
  • 173
  • 6
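A minimal sketch of the kind of thresholding the question above describes, assuming a standard 4x4 Bayer matrix as the candidate pattern; the matrix and the NumPy usage are illustrative, not the asker's search procedure:

```python
import numpy as np

# Ordered dithering against a tiled 4x4 Bayer matrix, as a baseline pattern
# to compare candidates against. The matrix values are the usual 0..15 / 16.
BAYER_4X4 = (1.0 / 16.0) * np.array([
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
])

def dither(gray: np.ndarray) -> np.ndarray:
    """Threshold a greyscale image (values in [0, 1]) against a tiled pattern."""
    h, w = gray.shape
    tiled = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)  # 1-bit output: 0 or 1

# Example: dither a random greyscale image.
img = np.random.rand(64, 64)
bw = dither(img)
```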
3
votes
0 answers

How to feed the input to a Memory Augmented Neural Network (MANN) to do one-shot learning?

In this paper by DeepMind on one-shot learning they have published an architecture explaining how the system works with an external memory. I understand the mechanism perfectly. But what I don't understand is how they feed the data into the…
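For context, the Santoro et al. MANN setup feeds the current sample together with the previous step's label (time-offset labels), so the network has to use its external memory to bind samples to labels. A rough NumPy sketch of that input format, with the episode sizes and flattened-image shape as assumptions:

```python
import numpy as np

# At step t the network sees the current sample x_t concatenated with the
# *previous* step's one-hot label y_{t-1}; the target at step t is y_t.
def build_episode(images, labels, num_classes):
    """images: (T, D) flattened samples, labels: (T,) integer class ids."""
    one_hot = np.eye(num_classes)[labels]      # (T, num_classes)
    prev_labels = np.zeros_like(one_hot)
    prev_labels[1:] = one_hot[:-1]             # shift labels by one time step
    inputs = np.concatenate([images, prev_labels], axis=1)
    return inputs, one_hot

# Example: a 5-way episode of 10 flattened 20x20 Omniglot-like images.
x = np.random.rand(10, 400)
y = np.random.randint(0, 5, size=10)
inp, tgt = build_episode(x, y, num_classes=5)
```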
2
votes
1 answer

How to implement my own loss function for prototype learning using a Keras Model

I'm trying to migrate this code, "Omniglot Character Set Classification Using Prototypical Network", to TensorFlow 2.1.0 and Keras 2.3.1. My problem is how to use the Euclidean distance between train data and validation data. Look at this…
VansFannel
  • 279
  • 1
  • 11
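A minimal TensorFlow 2 sketch of the Euclidean-distance step the question asks about, written against generic embedding tensors; the function names and shapes are assumptions, not the tutorial's code:

```python
import tensorflow as tf

# Prototypical-network loss: squared Euclidean distance between query
# embeddings and class prototypes, then softmax over negative distances.
def euclidean_distance(queries, prototypes):
    """queries: (Q, D), prototypes: (C, D) -> (Q, C) squared distances."""
    q = tf.expand_dims(queries, 1)      # (Q, 1, D)
    p = tf.expand_dims(prototypes, 0)   # (1, C, D)
    return tf.reduce_sum(tf.square(q - p), axis=-1)

def prototypical_loss(support_emb, support_labels, query_emb, query_labels, num_classes):
    # Prototype of each class = mean of its support embeddings.
    prototypes = tf.stack([
        tf.reduce_mean(tf.boolean_mask(support_emb, tf.equal(support_labels, c)), axis=0)
        for c in range(num_classes)
    ])
    logits = -euclidean_distance(query_emb, prototypes)   # closer => larger logit
    return tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(query_labels, logits, from_logits=True)
    )

# Toy example: 5 classes, 1-shot support, 10 queries, 16-dim embeddings.
s_emb = tf.random.normal([5, 16]); s_lab = tf.constant([0, 1, 2, 3, 4])
q_emb = tf.random.normal([10, 16]); q_lab = tf.constant([0, 1, 2, 3, 4, 0, 1, 2, 3, 4])
print(prototypical_loss(s_emb, s_lab, q_emb, q_lab, num_classes=5))
```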
2
votes
0 answers

Meta-learning: how to train a model with a support set and a query set

I've just started learning meta-learning by reading the book Hands-On Meta Learning with Python. I think I know the answer to my question, but I'm a little confused about how to implement the algorithm with Keras. This piece of code is from an example…
VansFannel
  • 279
  • 1
  • 11
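A rough sketch of how a support set and a query set are typically sampled for one N-way K-shot episode, independent of the book's code; the pool layout and sizes are illustrative:

```python
import numpy as np

# One meta-learning episode: sample an N-way K-shot support set plus a query
# set from a labelled pool. The loss is then computed on the query set using
# whatever the model built from the support set (prototypes, adapted weights).
def sample_episode(data_by_class, n_way=5, k_shot=1, q_queries=5, rng=np.random):
    classes = rng.choice(len(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))[: k_shot + q_queries]
        samples = data_by_class[c][idx]
        support += [(x, new_label) for x in samples[:k_shot]]
        query += [(x, new_label) for x in samples[k_shot:]]
    return support, query

# Example with a toy pool of 20 classes, 30 examples each, 64-dim features.
pool = [np.random.rand(30, 64) for _ in range(20)]
support_set, query_set = sample_episode(pool)
```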
2
votes
1 answer

Automatically use several cores in R

I am using a library called MFE to generate meta-features. However, I am now working with several files, and I noticed that I am using only one core of my machine and that it is taking too much time. I have been trying to use some libraries as I saw…
1
vote
0 answers

Stacking - Appropriate base and meta models

When implementing stacking for model building and prediction (for example, using sklearn's StackingRegressor), what is the appropriate choice of models for the base models and the final meta model? Should weak/linear models be used as the base…
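A minimal sketch of one common (not universal) choice: diverse, reasonably strong base learners with a simple, well-regularised linear meta-learner, using sklearn's StackingRegressor; the specific estimators and dataset are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=20, noise=0.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RidgeCV(),   # simple linear meta-model on out-of-fold predictions
    cv=5,
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```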
1
vote
1 answer

How to optimize hyperparameters in a stacked model?

I was wondering whether somebody could explain how to optimize the hyperparameters of the base learners and the meta algorithm when stacking. In many tutorials they seem to be plucked out of thin air! Thanks, Jack
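One approach, sketched below, is to treat the whole stack as a single estimator and search the nested parameter names ("&lt;estimator name&gt;__&lt;param&gt;", "final_estimator__&lt;param&gt;") with GridSearchCV; the grid values are illustrative, not recommendations:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=300, n_features=10, noise=0.5, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0))],
    final_estimator=Ridge(),
    cv=5,
)
param_grid = {
    "rf__n_estimators": [100, 300],           # base learner hyperparameters
    "rf__max_depth": [None, 5],
    "final_estimator__alpha": [0.1, 1.0, 10.0],  # meta-model hyperparameter
}
search = GridSearchCV(stack, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```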
1
vote
0 answers

Are there any meta-knowledge banks available?

What resources do you use to learn meta knowledge? By meta knowledge, I mean generalized information that will help us make more informed decisions when working on a problem later. An example of meta knowledge: lots of time series data? Build a…
1
vote
0 answers

Isn't the optimizer network in DeepMind's "Learning to learn" a DRQN?

In the paper "Learning to learn by gradient descent by gradient descent" they describe an RNN that learns a transformation of the gradients, acting as a learned optimizer. The optimizer network directly interacts with the environment to take actions, $\theta_{t+1} =…
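A rough PyTorch sketch of the coordinatewise RNN optimizer the excerpt refers to; the class and variable names are illustrative and this is not the paper's published code:

```python
import torch
import torch.nn as nn

# An LSTM cell applied coordinatewise maps the current gradient to a
# parameter update: theta_{t+1} = theta_t + g_t(grad f(theta_t), phi).
class LearnedOptimizer(nn.Module):
    def __init__(self, hidden_size=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, grad_flat, state=None):
        # grad_flat: (num_params, 1); each coordinate is its own "batch" item.
        h, c = self.cell(grad_flat, state)
        return self.out(h), (h, c)

# One unrolled step on a toy quadratic optimizee f(theta) = ||theta||^2.
opt_net = LearnedOptimizer()
theta = torch.randn(10, requires_grad=True)
loss = (theta ** 2).sum()
grad, = torch.autograd.grad(loss, theta, create_graph=True)
update, state = opt_net(grad.view(-1, 1))
theta_next = theta + update.view_as(theta)   # theta_{t+1} = theta_t + g_t
```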
1
vote
1 answer

Can clustering my data first help me learn better classifiers?

I was thinking about this lately. Let's say that we have a very complex space, which makes it hard to learn a classifier that can efficiently split it. But what if this very complex space is actually made up of a bunch of "simple" subspaces? By…
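A small sklearn sketch of the idea: partition the space with k-means, fit a simple classifier per cluster, and route test points through the cluster assignment; the dataset and the number of clusters are made up:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)

k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
local_clfs = {}
for c in range(k):
    mask = km.labels_ == c
    if len(np.unique(y[mask])) > 1:          # need both classes present to fit
        local_clfs[c] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

def predict(X_new):
    clusters = km.predict(X_new)
    preds = np.empty(len(X_new), dtype=int)
    for i, c in enumerate(clusters):
        clf = local_clfs.get(c)
        # Fall back to the cluster's majority class if no local model was fit.
        preds[i] = clf.predict(X_new[i:i+1])[0] if clf else np.bincount(y[km.labels_ == c]).argmax()
    return preds

print((predict(X) == y).mean())
```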
1
vote
0 answers

Meta-learning libraries that work on Google Colab in 2025 (Python 3.11.11)?

learn2learn (https://github.com/learnables/learn2learn, https://pypi.org/project/learn2learn/, http://slack.learn2learn.net/) used to work just fine until 2024, but Google Colab seems to have updated its Python version to 3.11.11, which seems to…
Pablo Messina
  • 197
  • 1
  • 3
  • 11
1
vote
0 answers

Can OpenAI's CLIP Model or DeepMind's Flamingo Model Predict Classes Truly Never Before Seen for Zero- or Few-Shot Learning?

One type of statement about zero-shot and few-shot learning that I continually come across in the literature is that these models can predict new, unseen classes at inference time that they were never trained on. However, such sources typically do…
user141493
  • 361
  • 1
  • 4
  • 9
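For reference, the standard zero-shot classification recipe with the open-source openai/CLIP package looks roughly like this; the image path and prompt list are placeholders, and the "classes" are only ever seen as text strings, which is part of what the question probes:

```python
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image and prompts: the candidate classes are just text.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
prompts = ["a photo of an axolotl", "a photo of a dog", "a photo of a cat"]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for p, prob in zip(prompts, probs[0]):
    print(f"{p}: {prob:.3f}")
```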
1
vote
0 answers

Strong bias from Linear SVR meta model

I have built nine meta models based on the model stacking principle, which I compare to a reference model for a number of time series. See the results below. The 22 base models that are trained on 70% of the training data produce forecasts on the…
Tim Stack
  • 121
  • 3
1
vote
0 answers

Why would a Linear SVR model greatly outperform a Linear Regression model in model stacking?

I have built nine meta models based on the model stacking principle, which I compare to a reference model for a number of time series. See the results below. The 22 base models that are trained on 70% of the training data produce forecasts on the…
Tim Stack
  • 121
  • 3
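A synthetic sketch of the comparison in the question: both meta-models are fit on a matrix of base-model forecasts. The data is made up and only shows the mechanics, not the asker's results:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))                       # toy "time series" target
Z = np.column_stack([y + rng.normal(scale=s, size=300)    # 22 noisy base-model forecasts
                     for s in rng.uniform(0.5, 3.0, size=22)])

split = int(0.7 * len(y))                                 # train meta-models on 70%
for name, meta in [("LinearRegression", LinearRegression()),
                   ("LinearSVR", LinearSVR(max_iter=10000))]:
    meta.fit(Z[:split], y[:split])
    mae = mean_absolute_error(y[split:], meta.predict(Z[split:]))
    print(f"{name}: MAE = {mae:.3f}")
```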
1
vote
1 answer

Is ensemble learning a subset of meta learning?

I'm studying ensemble learning methods, focusing on random forest and gradient boosting. I read this article about the topic and this one about meta-learning. Is it possible to say that ensemble learning is a subset of meta-learning?
Inuraghe
  • 491
  • 4
  • 19